Dec 12 16:15:00 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 12 16:15:00 crc kubenswrapper[5130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 16:15:00 crc kubenswrapper[5130]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 12 16:15:00 crc kubenswrapper[5130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 16:15:00 crc kubenswrapper[5130]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 16:15:00 crc kubenswrapper[5130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 16:15:00 crc kubenswrapper[5130]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
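The deprecation warnings above all point at the same remedy: set these parameters in the file passed via the kubelet's --config flag instead of on the command line. A minimal sketch of such a KubeletConfiguration fragment follows; the field names come from the upstream kubelet.config.k8s.io/v1beta1 API, but every value shown is an illustrative assumption, not taken from this host.

```yaml
# Hypothetical fragment for the file named by --config
# (here /etc/kubernetes/kubelet.conf, per the FLAG dump later in this log).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir (path is an example, not this host's value)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints (example taint)
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# replaces --system-reserved (example reservations)
systemReserved:
  cpu: 500m
  memory: 1Gi
# --minimum-container-ttl-duration has no config-file field; the log suggests
# evictionHard / evictionSoft thresholds instead.
```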
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.201854 5130 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204078 5130 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204116 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204126 5130 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204132 5130 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204138 5130 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204143 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204147 5130 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204152 5130 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204157 5130 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204161 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204165 5130 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204169 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204173 5130 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204196 5130 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204200 5130 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204204 5130 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204210 5130 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204214 5130 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204218 5130 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204224 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204228 5130 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204232 5130 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204237 5130 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204241 5130 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204245 5130 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204250 5130 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204255 5130 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204259 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204263 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204267 5130 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204271 5130 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204275 5130 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204279 5130 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204283 5130 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204288 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204292 5130 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204296 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204300 5130 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204304 5130 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204308 5130 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204313 5130 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204317 5130 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204321 5130 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204325 5130 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204329 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204333 5130 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204337 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204342 5130 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204346 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204352 5130 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204356 5130 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204360 5130 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204364 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204368 5130 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204372 5130 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204378 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204383 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204387 5130 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204391 5130 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204399 5130 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204404 5130 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204409 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204414 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204418 5130 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204422 5130 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204426 5130 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204431 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204435 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204439 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204443 5130 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204448 5130 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204452 5130 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204456 5130 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204461 5130 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204465 5130 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204470 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204474 5130 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204478 5130 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204485 5130 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204491 5130 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204496 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204500 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204504 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204508 5130 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204513 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.204517 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205407 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205418 5130 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205423 5130 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205428 5130 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205432 5130 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205448 5130 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205453 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205457 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205461 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205465 5130 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205469 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205473 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205477 5130 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205482 5130 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205486 5130 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205490 5130 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205494 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205498 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205502 5130 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205506 5130 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205510 5130 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205515 5130 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205519 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205523 5130 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205527 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205531 5130 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205535 5130 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205540 5130 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205544 5130 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205548 5130 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205552 5130 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205557 5130 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205563 5130 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205568 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205572 5130 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205576 5130 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205580 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205585 5130 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205591 5130 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205595 5130 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205599 5130 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205603 5130 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205608 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205612 5130 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205651 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205656 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205660 5130 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205664 5130 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205669 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205673 5130 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205677 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205681 5130 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205685 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205690 5130 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205698 5130 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205703 5130 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205708 5130 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205712 5130 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205717 5130 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205721 5130 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205725 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205729 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205734 5130 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205738 5130 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205742 5130 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205746 5130 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205751 5130 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205755 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205759 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205764 5130 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205768 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205781 5130 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205786 5130 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205790 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205795 5130 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205799 5130 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205803 5130 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205807 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205814 5130 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205819 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205824 5130 feature_gate.go:328] unrecognized feature gate: Example2
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205830 5130 feature_gate.go:328] unrecognized feature gate: Example
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205835 5130 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205839 5130 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205843 5130 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.205848 5130 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.205952 5130 flags.go:64] FLAG: --address="0.0.0.0"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.205963 5130 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.205976 5130 flags.go:64] FLAG: --anonymous-auth="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.205983 5130 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.205989 5130 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.205994 5130 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206000 5130 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206007 5130 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206012 5130 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206017 5130 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206022 5130 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206028 5130 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206033 5130 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206038 5130 flags.go:64] FLAG: --cgroup-root=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206043 5130 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206048 5130 flags.go:64] FLAG: --client-ca-file=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206052 5130 flags.go:64] FLAG: --cloud-config=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206057 5130 flags.go:64] FLAG: --cloud-provider=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206071 5130 flags.go:64] FLAG: --cluster-dns="[]"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206080 5130 flags.go:64] FLAG: --cluster-domain=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206085 5130 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206126 5130 flags.go:64] FLAG: --config-dir=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206132 5130 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206137 5130 flags.go:64] FLAG: --container-log-max-files="5"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206144 5130 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206149 5130 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206154 5130 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206159 5130 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206175 5130 flags.go:64] FLAG: --contention-profiling="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206193 5130 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206198 5130 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206204 5130 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206209 5130 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206216 5130 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206221 5130 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206226 5130 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206230 5130 flags.go:64] FLAG: --enable-load-reader="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206235 5130 flags.go:64] FLAG: --enable-server="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206240 5130 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206252 5130 flags.go:64] FLAG: --event-burst="100"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206257 5130 flags.go:64] FLAG: --event-qps="50"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206261 5130 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206266 5130 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206271 5130 flags.go:64] FLAG: --eviction-hard=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206277 5130 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206282 5130 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206287 5130 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206292 5130 flags.go:64] FLAG: --eviction-soft=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206296 5130 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206301 5130 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206305 5130 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206318 5130 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206323 5130 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206328 5130 flags.go:64] FLAG: --fail-swap-on="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206332 5130 flags.go:64] FLAG: --feature-gates=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206338 5130 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206343 5130 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206348 5130 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206353 5130 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206357 5130 flags.go:64] FLAG: --healthz-port="10248"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206362 5130 flags.go:64] FLAG: --help="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206367 5130 flags.go:64] FLAG: --hostname-override=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206371 5130 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206380 5130 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206384 5130 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206389 5130 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206393 5130 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206397 5130 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206402 5130 flags.go:64] FLAG: --image-service-endpoint=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206406 5130 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206410 5130 flags.go:64] FLAG: --kube-api-burst="100"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206414 5130 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206419 5130 flags.go:64] FLAG: --kube-api-qps="50"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206424 5130 flags.go:64] FLAG: --kube-reserved=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206428 5130 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206435 5130 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206439 5130 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206444 5130 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206448 5130 flags.go:64] FLAG: --lock-file=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206452 5130 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206457 5130 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206461 5130 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206469 5130 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206473 5130 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206479 5130 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206483 5130 flags.go:64] FLAG: --logging-format="text"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206488 5130 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206493 5130 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206498 5130 flags.go:64] FLAG: --manifest-url=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206502 5130 flags.go:64] FLAG: --manifest-url-header=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206509 5130 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206514 5130
flags.go:64] FLAG: --max-open-files="1000000" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206520 5130 flags.go:64] FLAG: --max-pods="110" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206525 5130 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206530 5130 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206540 5130 flags.go:64] FLAG: --memory-manager-policy="None" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206544 5130 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206550 5130 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206555 5130 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206560 5130 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206575 5130 flags.go:64] FLAG: --node-status-max-images="50" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206580 5130 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206585 5130 flags.go:64] FLAG: --oom-score-adj="-999" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206590 5130 flags.go:64] FLAG: --pod-cidr="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206595 5130 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206604 5130 flags.go:64] FLAG: --pod-manifest-path="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206608 5130 flags.go:64] FLAG: --pod-max-pids="-1" Dec 12 
16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206613 5130 flags.go:64] FLAG: --pods-per-core="0" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206620 5130 flags.go:64] FLAG: --port="10250" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206625 5130 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206630 5130 flags.go:64] FLAG: --provider-id="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206634 5130 flags.go:64] FLAG: --qos-reserved="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206639 5130 flags.go:64] FLAG: --read-only-port="10255" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206644 5130 flags.go:64] FLAG: --register-node="true" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206649 5130 flags.go:64] FLAG: --register-schedulable="true" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206654 5130 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206663 5130 flags.go:64] FLAG: --registry-burst="10" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206673 5130 flags.go:64] FLAG: --registry-qps="5" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206679 5130 flags.go:64] FLAG: --reserved-cpus="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206683 5130 flags.go:64] FLAG: --reserved-memory="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206689 5130 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206694 5130 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206698 5130 flags.go:64] FLAG: --rotate-certificates="false" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206703 5130 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206708 
5130 flags.go:64] FLAG: --runonce="false" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206712 5130 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206717 5130 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206724 5130 flags.go:64] FLAG: --seccomp-default="false" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206729 5130 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206734 5130 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206739 5130 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206744 5130 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206749 5130 flags.go:64] FLAG: --storage-driver-password="root" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206754 5130 flags.go:64] FLAG: --storage-driver-secure="false" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206758 5130 flags.go:64] FLAG: --storage-driver-table="stats" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206763 5130 flags.go:64] FLAG: --storage-driver-user="root" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206767 5130 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206772 5130 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206777 5130 flags.go:64] FLAG: --system-cgroups="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206782 5130 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206792 5130 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 12 
16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206796 5130 flags.go:64] FLAG: --tls-cert-file="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206801 5130 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206811 5130 flags.go:64] FLAG: --tls-min-version="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206815 5130 flags.go:64] FLAG: --tls-private-key-file="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206820 5130 flags.go:64] FLAG: --topology-manager-policy="none" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206824 5130 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206829 5130 flags.go:64] FLAG: --topology-manager-scope="container" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206834 5130 flags.go:64] FLAG: --v="2" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206841 5130 flags.go:64] FLAG: --version="false" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206858 5130 flags.go:64] FLAG: --vmodule="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206865 5130 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.206871 5130 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.206992 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207000 5130 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207005 5130 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207010 5130 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207014 5130 
feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207018 5130 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207023 5130 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207027 5130 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207031 5130 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207036 5130 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207040 5130 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207044 5130 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207048 5130 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207053 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207057 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207061 5130 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207065 5130 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207069 5130 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207074 5130 
feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207081 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207086 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207090 5130 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207094 5130 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207099 5130 feature_gate.go:328] unrecognized feature gate: Example Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207103 5130 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207107 5130 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207111 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207116 5130 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207120 5130 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207124 5130 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207136 5130 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207141 5130 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207145 5130 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 16:15:00 crc 
kubenswrapper[5130]: W1212 16:15:00.207149 5130 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207153 5130 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207157 5130 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207162 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207166 5130 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207170 5130 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207175 5130 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207197 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207201 5130 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207205 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207210 5130 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207214 5130 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207218 5130 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207230 5130 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 
16:15:00.207235 5130 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207239 5130 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207243 5130 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207248 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207254 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207260 5130 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207266 5130 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207270 5130 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207275 5130 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207279 5130 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207283 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207288 5130 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207292 5130 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207297 5130 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207302 5130 feature_gate.go:328] 
unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207306 5130 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207318 5130 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207323 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207327 5130 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207331 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207335 5130 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207340 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207344 5130 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207348 5130 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207352 5130 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207356 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207360 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207365 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207369 5130 feature_gate.go:328] unrecognized feature gate: 
ClusterVersionOperatorConfiguration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207373 5130 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207379 5130 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207384 5130 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207388 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207393 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207397 5130 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207401 5130 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207408 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207412 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.207416 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.207430 5130 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.219416 5130 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.219473 5130 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219566 5130 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219603 5130 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219611 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219617 5130 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219622 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219627 5130 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219631 5130 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219636 5130 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219714 5130 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219722 5130 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219726 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219731 5130 
feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219735 5130 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219740 5130 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219744 5130 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219749 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219753 5130 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219757 5130 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219762 5130 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219766 5130 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219771 5130 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219797 5130 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219801 5130 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219806 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219812 5130 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219817 5130 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219822 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219826 5130 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219830 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219842 5130 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219847 5130 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219852 5130 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219877 5130 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219882 5130 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219887 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219892 5130 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219897 5130 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219902 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219907 5130 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219912 5130 
feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219916 5130 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219921 5130 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219925 5130 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219959 5130 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219965 5130 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219969 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219973 5130 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219978 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219982 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219986 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219991 5130 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.219995 5130 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220000 5130 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220004 5130 feature_gate.go:328] unrecognized 
feature gate: AzureDedicatedHosts Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220009 5130 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220014 5130 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220038 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220043 5130 feature_gate.go:328] unrecognized feature gate: Example Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220047 5130 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220051 5130 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220056 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220061 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220065 5130 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220069 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220073 5130 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220077 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220081 5130 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220086 5130 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 16:15:00 crc kubenswrapper[5130]: 
W1212 16:15:00.220091 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220115 5130 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220120 5130 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220124 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220128 5130 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220132 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220135 5130 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220139 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220147 5130 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220151 5130 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220155 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220159 5130 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220163 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220167 5130 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 
16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220171 5130 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220198 5130 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220203 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220207 5130 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.220215 5130 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220443 5130 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220455 5130 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220459 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220465 5130 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220470 5130 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220476 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 
16:15:00.220480 5130 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220485 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220490 5130 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220494 5130 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220517 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220523 5130 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220527 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220531 5130 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220536 5130 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220540 5130 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220544 5130 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220548 5130 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220552 5130 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220556 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220560 5130 feature_gate.go:328] unrecognized 
feature gate: ExternalSnapshotMetadata Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220565 5130 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220571 5130 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220576 5130 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220601 5130 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220606 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220611 5130 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220615 5130 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220621 5130 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220628 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220633 5130 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220637 5130 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220641 5130 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220646 5130 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220650 5130 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220654 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220677 5130 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220682 5130 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220686 5130 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220691 5130 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220696 5130 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220700 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220705 5130 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 
16:15:00.220709 5130 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220713 5130 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220718 5130 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220783 5130 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220790 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220795 5130 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220799 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220804 5130 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220808 5130 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220812 5130 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220817 5130 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220822 5130 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220830 5130 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220835 5130 feature_gate.go:328] unrecognized feature gate: Example Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220840 5130 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220862 5130 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220868 5130 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220872 5130 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220877 5130 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220881 5130 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220885 5130 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220889 5130 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220894 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220898 5130 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220903 5130 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220907 5130 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220912 5130 
feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220916 5130 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220941 5130 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220947 5130 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220951 5130 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220956 5130 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220960 5130 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220965 5130 feature_gate.go:328] unrecognized feature gate: Example2 Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220969 5130 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220973 5130 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220978 5130 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220982 5130 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220986 5130 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220991 5130 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220995 5130 feature_gate.go:328] unrecognized feature 
gate: InsightsConfig Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.220999 5130 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.221023 5130 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.221032 5130 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.221302 5130 server.go:962] "Client rotation is on, will bootstrap in background" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.225922 5130 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.229575 5130 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.229710 5130 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.230447 5130 server.go:1019] "Starting client certificate rotation" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.230564 5130 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 
16:15:00.230633 5130 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.238527 5130 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.240563 5130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.240680 5130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.248210 5130 log.go:25] "Validated CRI v1 runtime API" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.269140 5130 log.go:25] "Validated CRI v1 image API" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.270710 5130 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.274523 5130 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-12-16-09-07-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.274568 5130 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 
major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.290508 5130 manager.go:217] Machine: {Timestamp:2025-12-12 16:15:00.289303895 +0000 UTC m=+0.186978747 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:6868605f-6684-4979-9a48-308ed352f6d0 BootID:e5f274e5-0ab6-408e-b0cf-1af5b029b864 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ac:f8:35 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ac:f8:35 
Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a8:13:c5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b2:c5:bf Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c0:69:a0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e1:5c:e6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:d8:81:61:ab:c6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:2d:94:09:37:69 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified 
Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.290716 5130 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.290863 5130 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.291960 5130 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.292000 5130 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.292170 5130 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.292200 5130 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.292219 5130 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.292375 5130 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.292687 5130 state_mem.go:36] "Initialized new in-memory state store" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.292818 5130 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.293406 5130 kubelet.go:491] "Attempting to sync node with API server" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.293425 5130 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.293440 5130 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.293451 5130 kubelet.go:397] "Adding apiserver pod source" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.293480 5130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.295317 5130 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.295335 5130 state_mem.go:40] "Initialized 
new in-memory state store for pod resource information tracking"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.295845 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.295852 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.296360 5130 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.296376 5130 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.297657 5130 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.297832 5130 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.298384 5130 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.298966 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.298984 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.298991 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.298998 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299009 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299016 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299026 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299034 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299041 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299053 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299085 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299210 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299485 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.299503 5130 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.300408 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.311566 5130 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.312001 5130 server.go:1295] "Started kubelet"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.312131 5130 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.312353 5130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.312522 5130 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.313340 5130 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.313247 5130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188083eb3e18f004 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.311609348 +0000 UTC m=+0.209284180,LastTimestamp:2025-12-12 16:15:00.311609348 +0000 UTC m=+0.209284180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:00 crc systemd[1]: Started Kubernetes Kubelet.
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.314096 5130 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.315742 5130 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.315826 5130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.316298 5130 volume_manager.go:295] "The desired_state_of_world populator starts"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.316324 5130 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.317341 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.317462 5130 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.317865 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.317969 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.319084 5130 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.319122 5130 factory.go:55] Registering systemd factory
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.319134 5130 factory.go:223] Registration of the systemd container factory successfully
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.319891 5130 factory.go:153] Registering CRI-O factory
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.319918 5130 factory.go:223] Registration of the crio container factory successfully
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.319947 5130 factory.go:103] Registering Raw factory
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.319964 5130 manager.go:1196] Started watching for new ooms in manager
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.320513 5130 manager.go:319] Starting recovery of all containers
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343074 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343125 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343134 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343145 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343153 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343162 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343186 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343196 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343208 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343226 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343235 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343244 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343253 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343262 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343273 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343283 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343292 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343301 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343309 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343317 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343326 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343333 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343342 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343350 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343358 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343368 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343376 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343384 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343395 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343406 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343416 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343425 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343435 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343444 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343454 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343463 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343471 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343479 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343488 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343498 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343509 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343538 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343547 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343555 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343574 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343585 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343594 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343603 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343616 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343627 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343635 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343643 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343651 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343661 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343670 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343679 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343696 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343705 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343714 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343751 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343763 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343772 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343781 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343790 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343799 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343807 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343820 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343830 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343837 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343845 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343854 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343861 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343868 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343876 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343884 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343894 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343906 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343915 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343923 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343931 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343938 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343945 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343954 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343962 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343969 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343978 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343986 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.343995 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344006 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344016 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344025 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344034 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344045 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344053 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344061 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344068 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344077 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344085 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344095 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344105 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344115 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344124 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344133 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897"
volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.344141 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.346812 5130 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.346917 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.346940 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.346962 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.346984 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347009 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347022 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347040 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347054 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347115 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347134 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347149 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347165 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347206 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347222 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347234 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347247 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347285 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347298 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347316 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347331 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347348 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347362 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347374 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347386 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347396 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347410 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347424 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347439 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" 
volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347453 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347471 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347482 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347494 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347505 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347518 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" 
seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347529 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347541 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347553 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347564 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347577 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347587 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347598 5130 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347608 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347618 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347628 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347639 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347649 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347660 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347672 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347683 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347695 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347709 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347721 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347735 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347747 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347760 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347773 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347786 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347800 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347811 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347837 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347854 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347868 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347884 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347900 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347916 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347932 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347948 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347963 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.347985 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348001 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348020 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" 
seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348036 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348054 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348075 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348094 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348109 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348124 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348145 5130 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348211 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348226 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348240 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348253 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348267 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348278 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348293 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348306 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348318 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348329 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348342 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348353 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348372 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348384 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348397 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348407 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348417 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348429 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348454 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348470 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348484 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348497 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348514 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348528 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348540 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348553 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348567 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348578 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348590 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348603 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" 
seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348618 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348631 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348644 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348658 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348670 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348684 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 
16:15:00.348700 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348714 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348729 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348742 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348753 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348763 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348808 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348823 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348843 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348864 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348882 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348897 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348911 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348927 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348942 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348960 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348980 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.348993 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349006 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349015 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349026 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349039 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349052 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349066 5130 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349080 5130 reconstruct.go:97] "Volume reconstruction finished" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.349089 5130 reconciler.go:26] "Reconciler: start to sync state" Dec 12 16:15:00 crc 
kubenswrapper[5130]: I1212 16:15:00.351362 5130 manager.go:324] Recovery completed Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.362250 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.364289 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.364353 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.364368 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.365658 5130 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.365677 5130 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.365695 5130 state_mem.go:36] "Initialized new in-memory state store" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.366773 5130 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.368435 5130 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.368484 5130 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.368517 5130 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.368528 5130 kubelet.go:2451] "Starting kubelet main sync loop" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.368575 5130 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.372162 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.375311 5130 policy_none.go:49] "None policy: Start" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.375373 5130 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.375396 5130 state_mem.go:35] "Initializing new in-memory state store" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.418446 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.447162 5130 manager.go:341] "Starting Device Plugin manager" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.447352 5130 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.447367 5130 server.go:85] "Starting device plugin registration server" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.448270 5130 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.448296 5130 container_log_manager.go:189] "Initializing container log rotate 
workers" workers=1 monitorPeriod="10s" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.448969 5130 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.449089 5130 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.449106 5130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.457859 5130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.458542 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.469155 5130 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.469518 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.470791 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.470862 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.470876 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 
16:15:00.472028 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.472208 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.472270 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.472861 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.472904 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.472930 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.473368 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.473391 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.473407 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.475237 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.475592 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.475646 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.476325 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.476349 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.476359 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.476585 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.476622 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.476642 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.478078 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.478874 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.478919 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.479326 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.479366 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.479378 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.479637 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.479675 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.479686 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480009 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480139 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480203 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480445 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480465 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480475 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480796 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480831 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.480843 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.481160 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.481204 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.481568 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.481593 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.481610 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.499634 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.506639 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.518935 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.527791 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.543104 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.548144 5130 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.548891 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.550754 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.550797 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.550807 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.550831 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.551451 5130 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.553169 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.553273 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.553312 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.553335 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.553377 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.553402 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554071 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554261 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554297 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554346 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554504 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554636 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554677 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554715 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554827 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554882 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554916 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.554966 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.555212 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.555223 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.555504 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.555526 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.555501 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.555823 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.555972 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.556076 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.556103 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.556590 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.556877 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.557059 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657494 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657500 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657568 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657599 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657636 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657665 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657669 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657696 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657724 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657755 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657798 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657820 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657828 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657855 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657874 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657889 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657900 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657922 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657951 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657968 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657987 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657990 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658021 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658041 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658054 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.657925 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658077 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658111 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658120 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658150 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658172 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.658228 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.752115 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.753564 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.753611 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.753623 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.753642 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.754017 5130 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.800110 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.807827 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.829227 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.830943 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-de7b6704448ea28c16ba89972353195b3ca79b4a5164f77cf05273a06ce8e25b WatchSource:0}: Error finding container de7b6704448ea28c16ba89972353195b3ca79b4a5164f77cf05273a06ce8e25b: Status 404 returned error can't find the container with id de7b6704448ea28c16ba89972353195b3ca79b4a5164f77cf05273a06ce8e25b
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.832788 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-34dc5e9d806476ce531556e858a5e2b2cece062c612c841b621cda5b6f4c7a49 WatchSource:0}: Error finding container 34dc5e9d806476ce531556e858a5e2b2cece062c612c841b621cda5b6f4c7a49: Status 404 returned error can't find the container with id 34dc5e9d806476ce531556e858a5e2b2cece062c612c841b621cda5b6f4c7a49
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.835526 5130 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.843811 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: W1212 16:15:00.844694 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-85f8b3d9b37f8e15c6b95bd9ac6402ce9fc5bdd3698114a49ae52ab1391ea885 WatchSource:0}: Error finding container 85f8b3d9b37f8e15c6b95bd9ac6402ce9fc5bdd3698114a49ae52ab1391ea885: Status 404 returned error can't find the container with id 85f8b3d9b37f8e15c6b95bd9ac6402ce9fc5bdd3698114a49ae52ab1391ea885
Dec 12 16:15:00 crc kubenswrapper[5130]: I1212 16:15:00.849425 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:00 crc kubenswrapper[5130]: E1212 16:15:00.920486 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms"
Dec 12 16:15:01 crc kubenswrapper[5130]: E1212 16:15:01.129515 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.154932 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.157505 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.157557 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.157568 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.157595 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:01 crc kubenswrapper[5130]: E1212 16:15:01.158108 5130 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 12 16:15:01 crc kubenswrapper[5130]: E1212 16:15:01.243044 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.301987 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.377986 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"de7b6704448ea28c16ba89972353195b3ca79b4a5164f77cf05273a06ce8e25b"}
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.379138 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6f67f4de129688295e50ba0e4ef5f4f78c8dbd0cad6d1fd1e699b4b43e6c26ce"}
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.379909 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"11ff62e5bf2776746ce2a8dd8af60670ec5bec17939ae48c005c772961e2f23b"}
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.380654 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"85f8b3d9b37f8e15c6b95bd9ac6402ce9fc5bdd3698114a49ae52ab1391ea885"}
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.381988 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"34dc5e9d806476ce531556e858a5e2b2cece062c612c841b621cda5b6f4c7a49"}
Dec 12 16:15:01 crc kubenswrapper[5130]: E1212 16:15:01.473436 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 16:15:01 crc kubenswrapper[5130]: E1212 16:15:01.721401 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s"
Dec 12 16:15:01 crc kubenswrapper[5130]: E1212 16:15:01.852926 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.958274 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.959327 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.959368 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.959381 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:01 crc kubenswrapper[5130]: I1212 16:15:01.959411 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:01 crc kubenswrapper[5130]: E1212 16:15:01.959873 5130 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.301290 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.316260 5130 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 12 16:15:02 crc kubenswrapper[5130]: E1212 16:15:02.317572 5130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.388084 5130 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="0dc94b247ff9bd0f3ef35dd695c1cf1c1e827d31b340b6cf9a365bb0fd9a4d61" exitCode=0
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.388222 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"0dc94b247ff9bd0f3ef35dd695c1cf1c1e827d31b340b6cf9a365bb0fd9a4d61"}
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.388491 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.389747 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.389787 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.389797 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:02 crc kubenswrapper[5130]: E1212 16:15:02.390045 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.390977 5130 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="79e86050c0bb4b8c438f3446d4ac411026fcf52903b4ef55a079a1dfc8e41ace" exitCode=0
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.391397 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"79e86050c0bb4b8c438f3446d4ac411026fcf52903b4ef55a079a1dfc8e41ace"}
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.391427 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.391852 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.391882 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.391890 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:02 crc kubenswrapper[5130]: E1212 16:15:02.392043 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.393957 5130 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="ac19827bea2108e590e57983557d3f9158fd935eedbff3452bfcf0437e6d4ebc" exitCode=0
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.394025 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"ac19827bea2108e590e57983557d3f9158fd935eedbff3452bfcf0437e6d4ebc"}
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.394104 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.394753 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.394785 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.394797 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:02 crc kubenswrapper[5130]: E1212 16:15:02.394996 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.396006 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ad212f911518bdcc0e46bbe51292c8675d6eaf9ff02549547d8c35be6da8a3d5"}
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.396041 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8f4b79b1b2b016b2f05ce1eb552b7f562fe2b200053382577073b9746227781c"}
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.396052 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d06abcd904ffab6d0e7ef275a88cc4d48ca01cbaf45c12b67e4ce3961c69e34f"}
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.397411 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6dba3c0695675d41f391363533d51f6311cd8233a6619881a3913b8726c0f824" exitCode=0
Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.397439 5130 kubelet.go:2569]
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6dba3c0695675d41f391363533d51f6311cd8233a6619881a3913b8726c0f824"} Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.397545 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.398008 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.398039 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.398050 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:02 crc kubenswrapper[5130]: E1212 16:15:02.398233 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.400863 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.402472 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.402509 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:02 crc kubenswrapper[5130]: I1212 16:15:02.402522 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.176129 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.301712 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.322200 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.401668 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"8cd154bfa81490323b7f5172029ebfba0bc643278a95dfb6bfe97ab4ae3d4c67"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.401732 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.402647 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.402686 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.402696 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.403028 5130 kubelet.go:3336] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.404292 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b4d07f13a9fe0e83c34c0425a9e6d5caa0ed5d9ef8b200e2b45b1aa3882ee3cc"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.404337 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6854378cf805fce7df5492be1ae30bc52f8aca905045464aa4db144246de92d1"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.404350 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e1386908a4c2cb67a9fce9909e6c1675ffbcbdb90691dc350e3ed7bd7afa8fea"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.406136 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b218d02334c8f81397f2f6b9c264419d6ec78b17441587c786444979c8fd4db8"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.406238 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.406919 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.406950 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 
16:15:03.406959 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.407165 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.408016 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"818cbab9fa2109ab2203469a2d7999f6b39f7f70722424aa9e78038d779eb741"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.408038 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f1a01912ddee091b284981f73500faf3fcfd7a1071596baf5cd12e42fadf2802"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.409409 5130 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="e737ef97141c311fc520d5548b5d0fa0f9791a5a26bb75389c9ba72e210d5ece" exitCode=0 Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.409446 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"e737ef97141c311fc520d5548b5d0fa0f9791a5a26bb75389c9ba72e210d5ece"} Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.409550 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.410116 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.410148 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.410160 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.410350 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.512286 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.560019 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.560912 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.560953 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.560968 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:03 crc kubenswrapper[5130]: I1212 16:15:03.560991 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.561531 5130 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Dec 12 16:15:03 crc kubenswrapper[5130]: E1212 16:15:03.710856 5130 reflector.go:200] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.424471 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"96c12daa01120f19be833f82d5f8c18b27d7dc4c74ac5543dd248efa1a9301d1"} Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.424515 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3f84b80c2f32e68a8eb79916fece466ce160a92d4d9b989d1bfd37673b951c48"} Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.430054 5130 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="05cf752c30fead512f852bc577d4a8c2151f410f239c8098b1da8a7f204f94f4" exitCode=0 Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.430303 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.430380 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.430697 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"05cf752c30fead512f852bc577d4a8c2151f410f239c8098b1da8a7f204f94f4"} Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.430786 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:04 crc 
kubenswrapper[5130]: I1212 16:15:04.431004 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431172 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431236 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431250 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431419 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431455 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431467 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431640 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:04 crc kubenswrapper[5130]: E1212 16:15:04.431675 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431680 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.431711 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:04 crc kubenswrapper[5130]: E1212 16:15:04.431804 5130 kubelet.go:3336] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:04 crc kubenswrapper[5130]: E1212 16:15:04.431904 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.432394 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.432438 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:04 crc kubenswrapper[5130]: I1212 16:15:04.432452 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:04 crc kubenswrapper[5130]: E1212 16:15:04.432683 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.004599 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.435213 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ff0c1a4be90d7170c4d1e101c972d2fff29828d4f3beebcba252b25fd5207de3"} Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.435369 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.435867 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.435971 5130 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.435981 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:05 crc kubenswrapper[5130]: E1212 16:15:05.436207 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.437109 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b5b81097d5d72c60e20547fc181ec92d257861d0f03b88a77e50c0fd85c18f9f"} Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.437158 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0463b42147135b47f3c30fe91c661361f17208126f55c37f09085757f90b532a"} Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.437202 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8b6cca39248203092b7da19e1fbdcc41fccbf547de52c3cc91dc227aa12b200f"} Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.437241 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.437741 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.437768 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:05 crc kubenswrapper[5130]: I1212 16:15:05.437777 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 16:15:05 crc kubenswrapper[5130]: E1212 16:15:05.438033 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.265441 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.265662 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.266375 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.266415 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.266426 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:06 crc kubenswrapper[5130]: E1212 16:15:06.266727 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.444383 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"dc99e3b4e8b1b5293eb3c7edfb74313dd878dcbeaab60b6531bad449253b5c8f"} Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.444500 5130 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.444550 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.445036 
5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.445091 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.445106 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:06 crc kubenswrapper[5130]: E1212 16:15:06.445523 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.672859 5130 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.762666 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.763858 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.763913 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.763927 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.763958 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:06 crc kubenswrapper[5130]: I1212 16:15:06.771678 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.215675 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.291925 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.451009 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b49b27e73f8ffadb6c3f2ba01c6f20a9811a2d2e1bef1e979564e97d5211098c"} Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.451205 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.451282 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.451844 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.451882 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.451897 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.452401 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.452440 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.452454 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:07 crc kubenswrapper[5130]: E1212 16:15:07.452450 5130 kubelet.go:3336] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:07 crc kubenswrapper[5130]: E1212 16:15:07.452652 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.922772 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.965948 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.966206 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.967632 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.967664 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:07 crc kubenswrapper[5130]: I1212 16:15:07.967674 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:07 crc kubenswrapper[5130]: E1212 16:15:07.967982 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.450385 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453085 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 
16:15:08.453168 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453239 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453728 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453781 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453794 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453896 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453934 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.453946 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5130]: E1212 16:15:08.454161 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.454203 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.454225 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:08 crc kubenswrapper[5130]: I1212 16:15:08.454235 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:08 crc kubenswrapper[5130]: E1212 16:15:08.454584 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:08 crc kubenswrapper[5130]: E1212 16:15:08.454842 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:09 crc kubenswrapper[5130]: I1212 16:15:09.454715 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:09 crc kubenswrapper[5130]: I1212 16:15:09.455302 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:09 crc kubenswrapper[5130]: I1212 16:15:09.455328 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:09 crc kubenswrapper[5130]: I1212 16:15:09.455338 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:09 crc kubenswrapper[5130]: E1212 16:15:09.455704 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:09 crc kubenswrapper[5130]: I1212 16:15:09.541362 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 12 16:15:10 crc kubenswrapper[5130]: I1212 16:15:10.455957 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:10 crc kubenswrapper[5130]: I1212 16:15:10.457047 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:10 crc kubenswrapper[5130]: I1212 16:15:10.457096 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:10 crc kubenswrapper[5130]: I1212 16:15:10.457108 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:10 crc kubenswrapper[5130]: E1212 16:15:10.457553 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:10 crc kubenswrapper[5130]: E1212 16:15:10.458696 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 16:15:11 crc kubenswrapper[5130]: I1212 16:15:11.450421 5130 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Dec 12 16:15:11 crc kubenswrapper[5130]: I1212 16:15:11.450528 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Dec 12 16:15:12 crc kubenswrapper[5130]: I1212 16:15:12.972266 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:12 crc kubenswrapper[5130]: I1212 16:15:12.972651 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:12 crc kubenswrapper[5130]: I1212 16:15:12.973855 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:12 crc kubenswrapper[5130]: I1212 16:15:12.973921 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:12 crc kubenswrapper[5130]: I1212 16:15:12.973934 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:12 crc kubenswrapper[5130]: E1212 16:15:12.974476 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:12 crc kubenswrapper[5130]: I1212 16:15:12.979090 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:13 crc kubenswrapper[5130]: I1212 16:15:13.462770 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:13 crc kubenswrapper[5130]: I1212 16:15:13.463454 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:13 crc kubenswrapper[5130]: I1212 16:15:13.463489 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:13 crc kubenswrapper[5130]: I1212 16:15:13.463499 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:13 crc kubenswrapper[5130]: E1212 16:15:13.463744 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:13 crc kubenswrapper[5130]: I1212 16:15:13.467657 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.302874 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.465491 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.466300 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.466371 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.466390 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:14 crc kubenswrapper[5130]: E1212 16:15:14.466877 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.524802 5130 trace.go:236] Trace[1679436566]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:04.523) (total time: 10001ms):
Dec 12 16:15:14 crc kubenswrapper[5130]: Trace[1679436566]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:15:14.524)
Dec 12 16:15:14 crc kubenswrapper[5130]: Trace[1679436566]: [10.001283075s] [10.001283075s] END
Dec 12 16:15:14 crc kubenswrapper[5130]: E1212 16:15:14.524869 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.628466 5130 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.628576 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.633403 5130 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.633516 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.685334 5130 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 12 16:15:14 crc kubenswrapper[5130]: I1212 16:15:14.685504 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 12 16:15:16 crc kubenswrapper[5130]: E1212 16:15:16.523951 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.221412 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.221717 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.222102 5130 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.222186 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.222769 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.222815 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.222847 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:17 crc kubenswrapper[5130]: E1212 16:15:17.223262 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.226537 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.472289 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.473107 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.473166 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.473199 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.473212 5130 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.473294 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 12 16:15:17 crc kubenswrapper[5130]: E1212 16:15:17.473579 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.947934 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.949076 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.950329 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.950387 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.950399 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:17 crc kubenswrapper[5130]: E1212 16:15:17.950896 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:17 crc kubenswrapper[5130]: I1212 16:15:17.968868 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 12 16:15:18 crc kubenswrapper[5130]: I1212 16:15:18.474889 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:18 crc kubenswrapper[5130]: I1212 16:15:18.475583 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:18 crc kubenswrapper[5130]: I1212 16:15:18.475641 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:18 crc kubenswrapper[5130]: I1212 16:15:18.475653 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:18 crc kubenswrapper[5130]: E1212 16:15:18.476147 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:19 crc kubenswrapper[5130]: I1212 16:15:19.617354 5130 trace.go:236] Trace[1840035170]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:08.055) (total time: 11561ms):
Dec 12 16:15:19 crc kubenswrapper[5130]: Trace[1840035170]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 11561ms (16:15:19.617)
Dec 12 16:15:19 crc kubenswrapper[5130]: Trace[1840035170]: [11.56197977s] [11.56197977s] END
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.617405 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 16:15:19 crc kubenswrapper[5130]: I1212 16:15:19.617463 5130 trace.go:236] Trace[1760293730]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:08.722) (total time: 10894ms):
Dec 12 16:15:19 crc kubenswrapper[5130]: Trace[1760293730]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 10894ms (16:15:19.617)
Dec 12 16:15:19 crc kubenswrapper[5130]: Trace[1760293730]: [10.89447609s] [10.89447609s] END
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.617422 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb3e18f004 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.311609348 +0000 UTC m=+0.209284180,LastTimestamp:2025-12-12 16:15:00.311609348 +0000 UTC m=+0.209284180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.617516 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 16:15:19 crc kubenswrapper[5130]: I1212 16:15:19.617556 5130 trace.go:236] Trace[444023385]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (12-Dec-2025 16:15:09.184) (total time: 10432ms):
Dec 12 16:15:19 crc kubenswrapper[5130]: Trace[444023385]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 10432ms (16:15:19.617)
Dec 12 16:15:19 crc kubenswrapper[5130]: Trace[444023385]: [10.432552562s] [10.432552562s] END
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.617571 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.617903 5130 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.621725 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.626381 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.630477 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413e1722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,LastTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: I1212 16:15:19.633049 5130 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.635797 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb4694cf64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.453945188 +0000 UTC m=+0.351620020,LastTimestamp:2025-12-12 16:15:00.453945188 +0000 UTC m=+0.351620020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.645400 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413d7163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.47083091 +0000 UTC m=+0.368505742,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.650308 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413de003\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.47087118 +0000 UTC m=+0.368546012,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.659201 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413e1722\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413e1722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,LastTimestamp:2025-12-12 16:15:00.47088104 +0000 UTC m=+0.368555872,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.672448 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413d7163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.472890647 +0000 UTC m=+0.370565479,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.677876 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413de003\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.472912407 +0000 UTC m=+0.370587239,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.682925 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413e1722\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413e1722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,LastTimestamp:2025-12-12 16:15:00.472936188 +0000 UTC m=+0.370611020,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.688293 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413d7163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.473383153 +0000 UTC m=+0.371057985,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.693409 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413de003\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.473402514 +0000 UTC m=+0.371077336,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.700190 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413e1722\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413e1722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,LastTimestamp:2025-12-12 16:15:00.473415724 +0000 UTC m=+0.371090556,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.712960 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413d7163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.476342212 +0000 UTC m=+0.374017044,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.724864 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413de003\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.476354462 +0000 UTC m=+0.374029294,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.731133 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413e1722\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413e1722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,LastTimestamp:2025-12-12 16:15:00.476364623 +0000 UTC m=+0.374039455,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.739370 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413d7163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.476611066 +0000 UTC m=+0.374285898,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.755627 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413de003\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.476628946 +0000 UTC m=+0.374303778,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.769416 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413e1722\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413e1722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,LastTimestamp:2025-12-12 16:15:00.476661066 +0000 UTC m=+0.374335898,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.776495 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413d7163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.479350462 +0000 UTC m=+0.377025294,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.784293 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413de003\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.479374032 +0000 UTC m=+0.377048864,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.790120 5130
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413e1722\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413e1722 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364375842 +0000 UTC m=+0.262050684,LastTimestamp:2025-12-12 16:15:00.479384192 +0000 UTC m=+0.377059024,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.795456 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413d7163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413d7163 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364333411 +0000 UTC m=+0.262008243,LastTimestamp:2025-12-12 16:15:00.479657586 +0000 UTC m=+0.377332418,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.801477 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188083eb413de003\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188083eb413de003 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.364361731 +0000 UTC m=+0.262036563,LastTimestamp:2025-12-12 16:15:00.479681126 +0000 UTC m=+0.377355958,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.807531 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083eb5d580eb1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.835839665 +0000 UTC m=+0.733514497,LastTimestamp:2025-12-12 16:15:00.835839665 +0000 UTC m=+0.733514497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.827273 5130 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083eb5d5989cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.835936716 +0000 UTC m=+0.733611548,LastTimestamp:2025-12-12 16:15:00.835936716 +0000 UTC m=+0.733611548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.836619 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083eb5df963bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.846412733 +0000 UTC m=+0.744087565,LastTimestamp:2025-12-12 16:15:00.846412733 +0000 UTC m=+0.744087565,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.843988 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083eb5f1cd81e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.865513502 +0000 UTC m=+0.763188334,LastTimestamp:2025-12-12 16:15:00.865513502 +0000 UTC m=+0.763188334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.850580 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083eb5f3785b8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:00.86726188 +0000 UTC m=+0.764936712,LastTimestamp:2025-12-12 16:15:00.86726188 +0000 UTC m=+0.764936712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.866462 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083eb7d9020e0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.376385248 +0000 UTC m=+1.274060080,LastTimestamp:2025-12-12 16:15:01.376385248 +0000 UTC m=+1.274060080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.871018 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083eb7d917b04 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.37647386 +0000 UTC m=+1.274148692,LastTimestamp:2025-12-12 16:15:01.37647386 +0000 UTC m=+1.274148692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.876044 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083eb7d92b52a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.376554282 +0000 UTC m=+1.274229114,LastTimestamp:2025-12-12 16:15:01.376554282 +0000 UTC m=+1.274229114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.882858 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083eb7d938145 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.376606533 +0000 UTC m=+1.274281365,LastTimestamp:2025-12-12 16:15:01.376606533 +0000 UTC m=+1.274281365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.890418 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083eb7d948bd6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.376674774 +0000 UTC m=+1.274349606,LastTimestamp:2025-12-12 16:15:01.376674774 +0000 UTC m=+1.274349606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.895636 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.188083eb829635c4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.460669892 +0000 UTC m=+1.358344724,LastTimestamp:2025-12-12 16:15:01.460669892 +0000 UTC m=+1.358344724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.902714 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083eb82a8758c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.461865868 +0000 UTC m=+1.359540700,LastTimestamp:2025-12-12 16:15:01.461865868 +0000 UTC m=+1.359540700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 
12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.909504 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083eb8498ec94 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.494402196 +0000 UTC m=+1.392077028,LastTimestamp:2025-12-12 16:15:01.494402196 +0000 UTC m=+1.392077028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.914786 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083eb8b8d2e83 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.611073155 +0000 UTC m=+1.508747987,LastTimestamp:2025-12-12 16:15:01.611073155 +0000 UTC m=+1.508747987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.921102 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083eb9a33e105 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.856878853 +0000 UTC m=+1.754553685,LastTimestamp:2025-12-12 16:15:01.856878853 +0000 UTC m=+1.754553685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.925709 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083eb9a381e01 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.857156609 +0000 UTC m=+1.754831441,LastTimestamp:2025-12-12 16:15:01.857156609 +0000 UTC m=+1.754831441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.931787 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083eb9a6e64a5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.860713637 +0000 UTC m=+1.758388469,LastTimestamp:2025-12-12 16:15:01.860713637 +0000 UTC m=+1.758388469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.940398 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083eb9af0aee3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 
16:15:01.869252323 +0000 UTC m=+1.766927155,LastTimestamp:2025-12-12 16:15:01.869252323 +0000 UTC m=+1.766927155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.948906 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083eb9b02beb2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:01.870436018 +0000 UTC m=+1.768110840,LastTimestamp:2025-12-12 16:15:01.870436018 +0000 UTC m=+1.768110840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.954053 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ebb217bac2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.257687234 +0000 UTC m=+2.155362066,LastTimestamp:2025-12-12 16:15:02.257687234 +0000 UTC m=+2.155362066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.959286 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ebb2ab04ee openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.267340014 +0000 UTC m=+2.165014846,LastTimestamp:2025-12-12 16:15:02.267340014 +0000 UTC m=+2.165014846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.967322 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ebb2c1ebb7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.268840887 +0000 UTC m=+2.166515729,LastTimestamp:2025-12-12 16:15:02.268840887 +0000 UTC m=+2.166515729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.973691 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ebba113bcd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.391479245 +0000 UTC m=+2.289154087,LastTimestamp:2025-12-12 16:15:02.391479245 +0000 UTC 
m=+2.289154087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.980790 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ebba2ac7a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.393153442 +0000 UTC m=+2.290828274,LastTimestamp:2025-12-12 16:15:02.393153442 +0000 UTC m=+2.290828274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.986357 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebba9ac6b9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.400493241 +0000 UTC m=+2.298168073,LastTimestamp:2025-12-12 16:15:02.400493241 +0000 UTC m=+2.298168073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.991498 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ebba9e80ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.400737517 +0000 UTC m=+2.298412349,LastTimestamp:2025-12-12 16:15:02.400737517 +0000 UTC m=+2.298412349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:19 crc kubenswrapper[5130]: E1212 16:15:19.996523 5130 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ebc138383a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.511474746 +0000 UTC m=+2.409149578,LastTimestamp:2025-12-12 16:15:02.511474746 +0000 UTC m=+2.409149578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.003245 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083ebc572ed2c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.58243102 +0000 UTC m=+2.480105852,LastTimestamp:2025-12-12 16:15:02.58243102 +0000 UTC 
m=+2.480105852,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.008378 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ebcd2d7d86 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.712098182 +0000 UTC m=+2.609773014,LastTimestamp:2025-12-12 16:15:02.712098182 +0000 UTC m=+2.609773014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.012683 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebcd3f85a3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: 
kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.713279907 +0000 UTC m=+2.610954739,LastTimestamp:2025-12-12 16:15:02.713279907 +0000 UTC m=+2.610954739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.017378 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ebcd400cfe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.713314558 +0000 UTC m=+2.610989390,LastTimestamp:2025-12-12 16:15:02.713314558 +0000 UTC m=+2.610989390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.022259 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ebcd432ec5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.713519813 +0000 UTC m=+2.611194645,LastTimestamp:2025-12-12 16:15:02.713519813 +0000 UTC m=+2.611194645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.026798 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188083ebcee9f68a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.741227146 +0000 UTC m=+2.638901978,LastTimestamp:2025-12-12 16:15:02.741227146 +0000 UTC m=+2.638901978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.033408 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ebcf11ec87 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.743846023 +0000 UTC m=+2.641520855,LastTimestamp:2025-12-12 16:15:02.743846023 +0000 UTC m=+2.641520855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.037298 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebcf11ec55 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.743845973 +0000 UTC m=+2.641520805,LastTimestamp:2025-12-12 16:15:02.743845973 +0000 UTC m=+2.641520805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.040359 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ebcf156393 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.744073107 +0000 UTC m=+2.641747939,LastTimestamp:2025-12-12 16:15:02.744073107 +0000 UTC m=+2.641747939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.045242 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebcf1fa790 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.744745872 +0000 UTC m=+2.642420704,LastTimestamp:2025-12-12 16:15:02.744745872 +0000 UTC m=+2.642420704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.050207 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ebcf28f483 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.745355395 +0000 UTC m=+2.643030227,LastTimestamp:2025-12-12 16:15:02.745355395 +0000 UTC m=+2.643030227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.054712 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebdb90fae4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.953499364 +0000 UTC m=+2.851174196,LastTimestamp:2025-12-12 16:15:02.953499364 +0000 UTC m=+2.851174196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.059576 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebdc3cb67c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.964754044 +0000 UTC m=+2.862428876,LastTimestamp:2025-12-12 16:15:02.964754044 +0000 UTC m=+2.862428876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.064884 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebdc4eb71f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.965933855 +0000 UTC m=+2.863608687,LastTimestamp:2025-12-12 16:15:02.965933855 +0000 UTC m=+2.863608687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.069096 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ebdc5a32bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.966686399 +0000 UTC m=+2.864361221,LastTimestamp:2025-12-12 16:15:02.966686399 +0000 UTC m=+2.864361221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.074629 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ebde0112a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.994399911 +0000 UTC m=+2.892074753,LastTimestamp:2025-12-12 16:15:02.994399911 +0000 UTC m=+2.892074753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.079379 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ebde170ada openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:02.995839706 +0000 UTC m=+2.893514538,LastTimestamp:2025-12-12 16:15:02.995839706 +0000 UTC m=+2.893514538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.083206 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ebf33a472b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:03.350470443 +0000 UTC m=+3.248145275,LastTimestamp:2025-12-12 16:15:03.350470443 +0000 UTC m=+3.248145275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.088965 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ebf6d99912 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:03.411243282 +0000 UTC m=+3.308918114,LastTimestamp:2025-12-12 16:15:03.411243282 +0000 UTC m=+3.308918114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.094764 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec20b2bff8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.113340408 +0000 UTC m=+4.011015240,LastTimestamp:2025-12-12 16:15:04.113340408 +0000 UTC m=+4.011015240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.099454 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188083ec219f4319 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.128840473 +0000 UTC m=+4.026515305,LastTimestamp:2025-12-12 16:15:04.128840473 +0000 UTC 
m=+4.026515305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.105211 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec225218c5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.140560581 +0000 UTC m=+4.038235423,LastTimestamp:2025-12-12 16:15:04.140560581 +0000 UTC m=+4.038235423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.112051 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec22b3659d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.146937245 +0000 UTC m=+4.044612077,LastTimestamp:2025-12-12 16:15:04.146937245 +0000 UTC m=+4.044612077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.116433 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec2e9db2ae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.346841774 +0000 UTC m=+4.244516606,LastTimestamp:2025-12-12 16:15:04.346841774 +0000 UTC m=+4.244516606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.120712 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec2ebeb6dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.349005532 +0000 UTC m=+4.246680364,LastTimestamp:2025-12-12 16:15:04.349005532 +0000 UTC m=+4.246680364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.125033 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec2fa55677 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.364119671 +0000 UTC m=+4.261794513,LastTimestamp:2025-12-12 16:15:04.364119671 +0000 UTC m=+4.261794513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.129209 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec2fab1031 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.364494897 +0000 UTC m=+4.262169729,LastTimestamp:2025-12-12 16:15:04.364494897 +0000 UTC m=+4.262169729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.133228 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec2fc548bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.366213308 +0000 UTC m=+4.263888140,LastTimestamp:2025-12-12 16:15:04.366213308 +0000 UTC m=+4.263888140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.137750 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec33d41058 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.434290776 +0000 UTC m=+4.331965608,LastTimestamp:2025-12-12 16:15:04.434290776 +0000 UTC m=+4.331965608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.142383 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec3c8f04d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.580760787 +0000 UTC m=+4.478435619,LastTimestamp:2025-12-12 16:15:04.580760787 +0000 UTC m=+4.478435619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 
crc kubenswrapper[5130]: E1212 16:15:20.147532 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec3d1c5810 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.590022672 +0000 UTC m=+4.487697504,LastTimestamp:2025-12-12 16:15:04.590022672 +0000 UTC m=+4.487697504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.152736 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec410678ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.655698158 +0000 UTC m=+4.553373000,LastTimestamp:2025-12-12 16:15:04.655698158 +0000 UTC m=+4.553373000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 
crc kubenswrapper[5130]: E1212 16:15:20.156850 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec41912a35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.664787509 +0000 UTC m=+4.562462341,LastTimestamp:2025-12-12 16:15:04.664787509 +0000 UTC m=+4.562462341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.160964 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec419fd956 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.665749846 +0000 UTC m=+4.563424678,LastTimestamp:2025-12-12 16:15:04.665749846 +0000 UTC m=+4.563424678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.165105 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec4c87492e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.848689454 +0000 UTC m=+4.746364286,LastTimestamp:2025-12-12 16:15:04.848689454 +0000 UTC m=+4.746364286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.169465 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec4de52699 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.871618201 +0000 UTC m=+4.769293043,LastTimestamp:2025-12-12 16:15:04.871618201 +0000 UTC m=+4.769293043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc 
kubenswrapper[5130]: E1212 16:15:20.173667 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec4df6fbe0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.872786912 +0000 UTC m=+4.770461754,LastTimestamp:2025-12-12 16:15:04.872786912 +0000 UTC m=+4.770461754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.178307 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec6dd21f4d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:05.407242061 +0000 UTC m=+5.304916893,LastTimestamp:2025-12-12 16:15:05.407242061 +0000 UTC m=+5.304916893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.182865 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec7592b75b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:05.537304411 +0000 UTC m=+5.434979243,LastTimestamp:2025-12-12 16:15:05.537304411 +0000 UTC m=+5.434979243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.190101 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec75a84d14 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:05.538718996 +0000 UTC m=+5.436393828,LastTimestamp:2025-12-12 16:15:05.538718996 +0000 UTC 
m=+5.436393828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.198442 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec8de1fd35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:05.945152821 +0000 UTC m=+5.842827653,LastTimestamp:2025-12-12 16:15:05.945152821 +0000 UTC m=+5.842827653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.202756 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec978016c4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.106508996 +0000 UTC m=+6.004183828,LastTimestamp:2025-12-12 16:15:06.106508996 +0000 UTC m=+6.004183828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.209290 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ec979817ea openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.108082154 +0000 UTC m=+6.005757026,LastTimestamp:2025-12-12 16:15:06.108082154 +0000 UTC m=+6.005757026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.214477 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ecb49aa1de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.594787806 +0000 UTC m=+6.492462638,LastTimestamp:2025-12-12 16:15:06.594787806 +0000 UTC 
m=+6.492462638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.220893 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188083ecc7b0f7e1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:06.915018721 +0000 UTC m=+6.812693563,LastTimestamp:2025-12-12 16:15:06.915018721 +0000 UTC m=+6.812693563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.227102 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 12 16:15:20 crc kubenswrapper[5130]: &Event{ObjectMeta:{kube-controller-manager-crc.188083edd606b8e4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 12 16:15:20 crc kubenswrapper[5130]: body: Dec 
12 16:15:20 crc kubenswrapper[5130]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.450487012 +0000 UTC m=+11.348161844,LastTimestamp:2025-12-12 16:15:11.450487012 +0000 UTC m=+11.348161844,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:20 crc kubenswrapper[5130]: > Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.231928 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188083edd6083d75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:11.450586485 +0000 UTC m=+11.348261327,LastTimestamp:2025-12-12 16:15:11.450586485 +0000 UTC m=+11.348261327,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.237105 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:20 crc kubenswrapper[5130]: &Event{ObjectMeta:{kube-apiserver-crc.188083ee9373cae1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 12 16:15:20 crc kubenswrapper[5130]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 16:15:20 crc kubenswrapper[5130]: Dec 12 16:15:20 crc kubenswrapper[5130]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:14.628528865 +0000 UTC m=+14.526203697,LastTimestamp:2025-12-12 16:15:14.628528865 +0000 UTC m=+14.526203697,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:20 crc kubenswrapper[5130]: > Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.241834 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ee9374f48b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:14.628605067 +0000 UTC m=+14.526279899,LastTimestamp:2025-12-12 16:15:14.628605067 +0000 UTC m=+14.526279899,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.249732 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ee9373cae1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:20 crc kubenswrapper[5130]: &Event{ObjectMeta:{kube-apiserver-crc.188083ee9373cae1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 12 16:15:20 crc kubenswrapper[5130]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 12 16:15:20 crc kubenswrapper[5130]: Dec 12 16:15:20 crc kubenswrapper[5130]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:14.628528865 +0000 UTC m=+14.526203697,LastTimestamp:2025-12-12 16:15:14.633480844 +0000 UTC m=+14.531155676,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:20 crc kubenswrapper[5130]: > Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.252528 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ee9374f48b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ee9374f48b openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:14.628605067 +0000 UTC m=+14.526279899,LastTimestamp:2025-12-12 16:15:14.633552996 +0000 UTC m=+14.531227838,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.258553 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:20 crc kubenswrapper[5130]: &Event{ObjectMeta:{kube-apiserver-crc.188083ee96d86788 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 12 16:15:20 crc kubenswrapper[5130]: body: Dec 12 16:15:20 crc kubenswrapper[5130]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:14.685454216 +0000 UTC m=+14.583129049,LastTimestamp:2025-12-12 16:15:14.685454216 +0000 UTC m=+14.583129049,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:20 crc kubenswrapper[5130]: > Dec 12 16:15:20 crc 
kubenswrapper[5130]: E1212 16:15:20.264377 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ee96da5830 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:14.68558136 +0000 UTC m=+14.583256192,LastTimestamp:2025-12-12 16:15:14.68558136 +0000 UTC m=+14.583256192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.270089 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:20 crc kubenswrapper[5130]: &Event{ObjectMeta:{kube-apiserver-crc.188083ef2e0b58e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 12 16:15:20 crc kubenswrapper[5130]: body: Dec 12 
16:15:20 crc kubenswrapper[5130]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:17.222152418 +0000 UTC m=+17.119827250,LastTimestamp:2025-12-12 16:15:17.222152418 +0000 UTC m=+17.119827250,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:20 crc kubenswrapper[5130]: > Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.275416 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ef2e0cb4aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:17.22224145 +0000 UTC m=+17.119916282,LastTimestamp:2025-12-12 16:15:17.22224145 +0000 UTC m=+17.119916282,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.282511 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ef2e0b58e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 12 16:15:20 crc kubenswrapper[5130]: &Event{ObjectMeta:{kube-apiserver-crc.188083ef2e0b58e2 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 12 16:15:20 crc kubenswrapper[5130]: body: Dec 12 16:15:20 crc kubenswrapper[5130]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:17.222152418 +0000 UTC m=+17.119827250,LastTimestamp:2025-12-12 16:15:17.473262888 +0000 UTC m=+17.370937720,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 12 16:15:20 crc kubenswrapper[5130]: > Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.288914 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ef2e0cb4aa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ef2e0cb4aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:17.22224145 +0000 UTC m=+17.119916282,LastTimestamp:2025-12-12 16:15:17.473323189 +0000 UTC m=+17.370998021,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:20 crc kubenswrapper[5130]: I1212 16:15:20.309104 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.458852 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:20 crc kubenswrapper[5130]: I1212 16:15:20.488600 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:20 crc kubenswrapper[5130]: I1212 16:15:20.488862 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:20 crc kubenswrapper[5130]: I1212 16:15:20.489826 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:20 crc kubenswrapper[5130]: I1212 16:15:20.489876 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:20 crc kubenswrapper[5130]: I1212 16:15:20.489889 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.490256 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:20 crc kubenswrapper[5130]: I1212 16:15:20.498252 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:15:20 crc kubenswrapper[5130]: E1212 16:15:20.501494 5130 reflector.go:200] "Failed to watch" err="failed 
to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.305514 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.485440 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.487337 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ff0c1a4be90d7170c4d1e101c972d2fff29828d4f3beebcba252b25fd5207de3" exitCode=255 Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.487462 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ff0c1a4be90d7170c4d1e101c972d2fff29828d4f3beebcba252b25fd5207de3"} Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.487708 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.487775 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.488505 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.488550 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.488560 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.488644 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.488677 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.488694 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:21 crc kubenswrapper[5130]: E1212 16:15:21.488915 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:21 crc kubenswrapper[5130]: E1212 16:15:21.489275 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:21 crc kubenswrapper[5130]: I1212 16:15:21.489638 5130 scope.go:117] "RemoveContainer" containerID="ff0c1a4be90d7170c4d1e101c972d2fff29828d4f3beebcba252b25fd5207de3" Dec 12 16:15:21 crc kubenswrapper[5130]: E1212 16:15:21.496637 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ec2fc548bc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec2fc548bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.366213308 +0000 UTC m=+4.263888140,LastTimestamp:2025-12-12 16:15:21.491104812 +0000 UTC m=+21.388779644,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:21 crc kubenswrapper[5130]: E1212 16:15:21.711854 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ec3c8f04d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec3c8f04d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.580760787 +0000 UTC m=+4.478435619,LastTimestamp:2025-12-12 16:15:21.707445633 +0000 UTC m=+21.605120455,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:21 crc kubenswrapper[5130]: E1212 16:15:21.720867 5130 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.188083ec3d1c5810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec3d1c5810 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.590022672 +0000 UTC m=+4.487697504,LastTimestamp:2025-12-12 16:15:21.716048321 +0000 UTC m=+21.613723153,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:22 crc kubenswrapper[5130]: I1212 16:15:22.304947 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:22 crc kubenswrapper[5130]: I1212 16:15:22.492343 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 16:15:22 crc kubenswrapper[5130]: I1212 16:15:22.494139 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166"} Dec 12 16:15:22 crc kubenswrapper[5130]: I1212 16:15:22.494397 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:22 crc 
kubenswrapper[5130]: I1212 16:15:22.495041 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:22 crc kubenswrapper[5130]: I1212 16:15:22.495079 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:22 crc kubenswrapper[5130]: I1212 16:15:22.495089 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:22 crc kubenswrapper[5130]: E1212 16:15:22.495410 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:22 crc kubenswrapper[5130]: E1212 16:15:22.930494 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.313644 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.497917 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.498895 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.501481 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166" exitCode=255 Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.501547 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166"} Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.501606 5130 scope.go:117] "RemoveContainer" containerID="ff0c1a4be90d7170c4d1e101c972d2fff29828d4f3beebcba252b25fd5207de3" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.501809 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.502324 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.502432 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.502519 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:23 crc kubenswrapper[5130]: E1212 16:15:23.502864 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:23 crc kubenswrapper[5130]: I1212 16:15:23.503602 5130 scope.go:117] "RemoveContainer" containerID="b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166" Dec 12 16:15:23 crc kubenswrapper[5130]: E1212 16:15:23.504046 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:23 crc kubenswrapper[5130]: E1212 16:15:23.508829 5130 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f0a478d482 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:23.504006274 +0000 UTC m=+23.401681106,LastTimestamp:2025-12-12 16:15:23.504006274 +0000 UTC m=+23.401681106,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.306151 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.508591 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.684647 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.684931 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.686001 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.686115 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.686198 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:24 crc kubenswrapper[5130]: E1212 16:15:24.686646 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:24 crc kubenswrapper[5130]: I1212 16:15:24.686965 5130 scope.go:117] "RemoveContainer" containerID="b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166" Dec 12 16:15:24 crc kubenswrapper[5130]: E1212 16:15:24.687237 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:24 crc kubenswrapper[5130]: E1212 16:15:24.692223 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f0a478d482\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f0a478d482 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:23.504006274 +0000 UTC m=+23.401681106,LastTimestamp:2025-12-12 16:15:24.687201728 +0000 UTC m=+24.584876550,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:25 crc kubenswrapper[5130]: I1212 16:15:25.306593 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:26 crc kubenswrapper[5130]: I1212 16:15:26.018302 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:26 crc kubenswrapper[5130]: I1212 16:15:26.019389 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:26 crc kubenswrapper[5130]: I1212 16:15:26.019504 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:26 crc kubenswrapper[5130]: I1212 16:15:26.019580 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:26 crc kubenswrapper[5130]: I1212 16:15:26.019657 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:26 crc kubenswrapper[5130]: E1212 16:15:26.028390 
5130 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:26 crc kubenswrapper[5130]: I1212 16:15:26.307107 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:27 crc kubenswrapper[5130]: I1212 16:15:27.306445 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:28 crc kubenswrapper[5130]: I1212 16:15:28.308165 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:29 crc kubenswrapper[5130]: I1212 16:15:29.305328 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:29 crc kubenswrapper[5130]: E1212 16:15:29.936383 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:30 crc kubenswrapper[5130]: E1212 16:15:30.159136 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource 
\"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 16:15:30 crc kubenswrapper[5130]: I1212 16:15:30.305920 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:30 crc kubenswrapper[5130]: E1212 16:15:30.459277 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:30 crc kubenswrapper[5130]: E1212 16:15:30.708611 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 16:15:30 crc kubenswrapper[5130]: E1212 16:15:30.894665 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 16:15:31 crc kubenswrapper[5130]: I1212 16:15:31.310804 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:32 crc kubenswrapper[5130]: E1212 16:15:32.219939 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 16:15:32 crc kubenswrapper[5130]: I1212 16:15:32.306309 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:32 crc kubenswrapper[5130]: I1212 16:15:32.495647 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:15:32 crc kubenswrapper[5130]: I1212 16:15:32.495899 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:32 crc kubenswrapper[5130]: I1212 16:15:32.496912 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:32 crc kubenswrapper[5130]: I1212 16:15:32.496944 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:32 crc kubenswrapper[5130]: I1212 16:15:32.496957 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:32 crc kubenswrapper[5130]: E1212 16:15:32.497459 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:15:32 crc kubenswrapper[5130]: I1212 16:15:32.497729 5130 scope.go:117] "RemoveContainer" containerID="b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166" Dec 12 16:15:32 crc kubenswrapper[5130]: E1212 16:15:32.497962 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:15:32 crc kubenswrapper[5130]: E1212 16:15:32.502297 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f0a478d482\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f0a478d482 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:23.504006274 +0000 UTC m=+23.401681106,LastTimestamp:2025-12-12 16:15:32.497933098 +0000 UTC m=+32.395607930,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:15:33 crc kubenswrapper[5130]: I1212 16:15:33.029047 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:33 crc kubenswrapper[5130]: I1212 16:15:33.030544 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:33 crc kubenswrapper[5130]: I1212 16:15:33.030615 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:33 crc kubenswrapper[5130]: I1212 16:15:33.030630 5130 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:33 crc kubenswrapper[5130]: I1212 16:15:33.030661 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:33 crc kubenswrapper[5130]: E1212 16:15:33.040314 5130 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:33 crc kubenswrapper[5130]: I1212 16:15:33.306407 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:34 crc kubenswrapper[5130]: I1212 16:15:34.305692 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:35 crc kubenswrapper[5130]: I1212 16:15:35.312102 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:36 crc kubenswrapper[5130]: I1212 16:15:36.305291 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:36 crc kubenswrapper[5130]: E1212 16:15:36.945245 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the 
namespace \"kube-node-lease\"" interval="7s" Dec 12 16:15:37 crc kubenswrapper[5130]: I1212 16:15:37.307512 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:38 crc kubenswrapper[5130]: I1212 16:15:38.308033 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:39 crc kubenswrapper[5130]: I1212 16:15:39.306078 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:40 crc kubenswrapper[5130]: I1212 16:15:40.040898 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:40 crc kubenswrapper[5130]: I1212 16:15:40.041817 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:40 crc kubenswrapper[5130]: I1212 16:15:40.041847 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:15:40 crc kubenswrapper[5130]: I1212 16:15:40.041857 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:15:40 crc kubenswrapper[5130]: I1212 16:15:40.041876 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 12 16:15:40 crc kubenswrapper[5130]: E1212 16:15:40.052320 5130 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" 
cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 12 16:15:40 crc kubenswrapper[5130]: I1212 16:15:40.310385 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:40 crc kubenswrapper[5130]: E1212 16:15:40.459851 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:15:41 crc kubenswrapper[5130]: I1212 16:15:41.305721 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:42 crc kubenswrapper[5130]: I1212 16:15:42.310794 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:43 crc kubenswrapper[5130]: I1212 16:15:43.307594 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 12 16:15:43 crc kubenswrapper[5130]: I1212 16:15:43.369473 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:15:43 crc kubenswrapper[5130]: I1212 16:15:43.370533 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:15:43 crc kubenswrapper[5130]: I1212 16:15:43.370569 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 12 16:15:43 crc kubenswrapper[5130]: I1212 16:15:43.370579 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:43 crc kubenswrapper[5130]: E1212 16:15:43.370870 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:43 crc kubenswrapper[5130]: I1212 16:15:43.371129 5130 scope.go:117] "RemoveContainer" containerID="b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166"
Dec 12 16:15:43 crc kubenswrapper[5130]: E1212 16:15:43.381015 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ec2fc548bc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec2fc548bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.366213308 +0000 UTC m=+4.263888140,LastTimestamp:2025-12-12 16:15:43.377015534 +0000 UTC m=+43.274690366,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:43 crc kubenswrapper[5130]: E1212 16:15:43.618034 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ec3c8f04d3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec3c8f04d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.580760787 +0000 UTC m=+4.478435619,LastTimestamp:2025-12-12 16:15:43.609453484 +0000 UTC m=+43.507128336,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:43 crc kubenswrapper[5130]: E1212 16:15:43.627363 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083ec3d1c5810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083ec3d1c5810 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:04.590022672 +0000 UTC m=+4.487697504,LastTimestamp:2025-12-12 16:15:43.621775841 +0000 UTC m=+43.519450693,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:43 crc kubenswrapper[5130]: E1212 16:15:43.953605 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 16:15:44 crc kubenswrapper[5130]: E1212 16:15:44.265549 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 16:15:44 crc kubenswrapper[5130]: I1212 16:15:44.308275 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:44 crc kubenswrapper[5130]: I1212 16:15:44.565735 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 16:15:44 crc kubenswrapper[5130]: I1212 16:15:44.569339 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00"}
Dec 12 16:15:44 crc kubenswrapper[5130]: I1212 16:15:44.569938 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:44 crc kubenswrapper[5130]: I1212 16:15:44.571611 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:44 crc kubenswrapper[5130]: I1212 16:15:44.571672 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:44 crc kubenswrapper[5130]: I1212 16:15:44.571687 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:44 crc kubenswrapper[5130]: E1212 16:15:44.572088 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.305532 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.572875 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.573338 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.574676 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00" exitCode=255
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.574711 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00"}
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.574771 5130 scope.go:117] "RemoveContainer" containerID="b0f2ba2f09f4574bf3c7fb27cac8da6598e86df49ff2bcced1ebb97a20643166"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.574961 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.575507 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.575542 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.575555 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:45 crc kubenswrapper[5130]: E1212 16:15:45.575876 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:45 crc kubenswrapper[5130]: I1212 16:15:45.576115 5130 scope.go:117] "RemoveContainer" containerID="955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00"
Dec 12 16:15:45 crc kubenswrapper[5130]: E1212 16:15:45.576308 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 16:15:45 crc kubenswrapper[5130]: E1212 16:15:45.581661 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f0a478d482\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f0a478d482 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:23.504006274 +0000 UTC m=+23.401681106,LastTimestamp:2025-12-12 16:15:45.576279886 +0000 UTC m=+45.473954718,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:46 crc kubenswrapper[5130]: I1212 16:15:46.305732 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:46 crc kubenswrapper[5130]: I1212 16:15:46.578661 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 16:15:47 crc kubenswrapper[5130]: I1212 16:15:47.053455 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:47 crc kubenswrapper[5130]: I1212 16:15:47.054410 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:47 crc kubenswrapper[5130]: I1212 16:15:47.054448 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:47 crc kubenswrapper[5130]: I1212 16:15:47.054461 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:47 crc kubenswrapper[5130]: I1212 16:15:47.054504 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:47 crc kubenswrapper[5130]: E1212 16:15:47.069043 5130 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 16:15:47 crc kubenswrapper[5130]: I1212 16:15:47.309263 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:48 crc kubenswrapper[5130]: I1212 16:15:48.309207 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:49 crc kubenswrapper[5130]: I1212 16:15:49.308487 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:50 crc kubenswrapper[5130]: I1212 16:15:50.307726 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:50 crc kubenswrapper[5130]: E1212 16:15:50.460809 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 16:15:50 crc kubenswrapper[5130]: E1212 16:15:50.959617 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 16:15:51 crc kubenswrapper[5130]: I1212 16:15:51.308397 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:51 crc kubenswrapper[5130]: E1212 16:15:51.799523 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 16:15:52 crc kubenswrapper[5130]: I1212 16:15:52.305956 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:53 crc kubenswrapper[5130]: E1212 16:15:53.030122 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 16:15:53 crc kubenswrapper[5130]: I1212 16:15:53.307148 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.069699 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.071506 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.071598 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.071617 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.071660 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:15:54 crc kubenswrapper[5130]: E1212 16:15:54.090581 5130 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.307656 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.571112 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.571703 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.573004 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.573065 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.573081 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:54 crc kubenswrapper[5130]: E1212 16:15:54.573515 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.573800 5130 scope.go:117] "RemoveContainer" containerID="955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00"
Dec 12 16:15:54 crc kubenswrapper[5130]: E1212 16:15:54.574061 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 16:15:54 crc kubenswrapper[5130]: E1212 16:15:54.580740 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f0a478d482\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f0a478d482 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:23.504006274 +0000 UTC m=+23.401681106,LastTimestamp:2025-12-12 16:15:54.57402737 +0000 UTC m=+54.471702202,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.684928 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.685231 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.686051 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.686097 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.686110 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:54 crc kubenswrapper[5130]: E1212 16:15:54.686476 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:54 crc kubenswrapper[5130]: I1212 16:15:54.686699 5130 scope.go:117] "RemoveContainer" containerID="955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00"
Dec 12 16:15:54 crc kubenswrapper[5130]: E1212 16:15:54.686921 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 12 16:15:54 crc kubenswrapper[5130]: E1212 16:15:54.692087 5130 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188083f0a478d482\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188083f0a478d482 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:15:23.504006274 +0000 UTC m=+23.401681106,LastTimestamp:2025-12-12 16:15:54.686888189 +0000 UTC m=+54.584563011,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:15:55 crc kubenswrapper[5130]: I1212 16:15:55.303356 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:55 crc kubenswrapper[5130]: I1212 16:15:55.443892 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 12 16:15:55 crc kubenswrapper[5130]: I1212 16:15:55.444353 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:15:55 crc kubenswrapper[5130]: I1212 16:15:55.445771 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:15:55 crc kubenswrapper[5130]: I1212 16:15:55.445834 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:15:55 crc kubenswrapper[5130]: I1212 16:15:55.445847 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:15:55 crc kubenswrapper[5130]: E1212 16:15:55.446279 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:15:56 crc kubenswrapper[5130]: I1212 16:15:56.311556 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:57 crc kubenswrapper[5130]: I1212 16:15:57.309524 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:57 crc kubenswrapper[5130]: E1212 16:15:57.776910 5130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 16:15:57 crc kubenswrapper[5130]: E1212 16:15:57.965888 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 16:15:58 crc kubenswrapper[5130]: I1212 16:15:58.308524 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:15:59 crc kubenswrapper[5130]: I1212 16:15:59.307947 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:16:00 crc kubenswrapper[5130]: I1212 16:16:00.309088 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:16:00 crc kubenswrapper[5130]: E1212 16:16:00.462168 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 12 16:16:01 crc kubenswrapper[5130]: I1212 16:16:01.090780 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:16:01 crc kubenswrapper[5130]: I1212 16:16:01.092020 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:01 crc kubenswrapper[5130]: I1212 16:16:01.092057 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:01 crc kubenswrapper[5130]: I1212 16:16:01.092069 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:01 crc kubenswrapper[5130]: I1212 16:16:01.092093 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:16:01 crc kubenswrapper[5130]: E1212 16:16:01.100048 5130 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 12 16:16:01 crc kubenswrapper[5130]: I1212 16:16:01.312102 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:16:02 crc kubenswrapper[5130]: I1212 16:16:02.307968 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:16:03 crc kubenswrapper[5130]: I1212 16:16:03.309689 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:16:04 crc kubenswrapper[5130]: I1212 16:16:04.306622 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:16:04 crc kubenswrapper[5130]: E1212 16:16:04.972553 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 12 16:16:05 crc kubenswrapper[5130]: I1212 16:16:05.306887 5130 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 12 16:16:05 crc kubenswrapper[5130]: I1212 16:16:05.491858 5130 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-db949"
Dec 12 16:16:05 crc kubenswrapper[5130]: I1212 16:16:05.498487 5130 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-db949"
Dec 12 16:16:05 crc kubenswrapper[5130]: I1212 16:16:05.571262 5130 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.231117 5130 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.369367 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.370244 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.370289 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.370299 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:06 crc kubenswrapper[5130]: E1212 16:16:06.370748 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.370969 5130 scope.go:117] "RemoveContainer" containerID="955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.499992 5130 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-11 16:11:05 +0000 UTC" deadline="2026-01-06 08:03:22.726508824 +0000 UTC"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.500096 5130 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="591h47m16.226418239s"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.637558 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.639894 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca"}
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.640152 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.640838 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.640901 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:06 crc kubenswrapper[5130]: I1212 16:16:06.640915 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:06 crc kubenswrapper[5130]: E1212 16:16:06.641574 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.101136 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.102823 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.102881 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.102904 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.103083 5130 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.114949 5130 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.115301 5130 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.115327 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.119271 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.119305 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.119321 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.119338 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.119354 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:08Z","lastTransitionTime":"2025-12-12T16:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.133931 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.141512 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.141545 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.141558 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.141577 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.141587 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:08Z","lastTransitionTime":"2025-12-12T16:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.151892 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.159699 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.159763 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.159779 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.159795 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.159809 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:08Z","lastTransitionTime":"2025-12-12T16:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.170036 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.176590 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.176618 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.176631 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.176647 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.176659 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:08Z","lastTransitionTime":"2025-12-12T16:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.187058 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.187250 5130 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.187286 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.287632 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.388336 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.488616 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.589660 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.647615 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.648209 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.650439 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca" exitCode=255 Dec 12 
16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.650549 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca"} Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.650658 5130 scope.go:117] "RemoveContainer" containerID="955019fff79f930017acd2b15c57220c1b096b7f3ecf8b903fa90997c2ef4c00" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.650863 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.651503 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.651544 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.651557 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.652014 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:16:08 crc kubenswrapper[5130]: I1212 16:16:08.652293 5130 scope.go:117] "RemoveContainer" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.652492 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.690353 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.791434 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.891578 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:08 crc kubenswrapper[5130]: E1212 16:16:08.992845 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.093048 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.193498 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.294157 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.395215 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.495565 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.596577 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: I1212 16:16:09.655367 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 12 16:16:09 
crc kubenswrapper[5130]: E1212 16:16:09.697693 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.798457 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:09 crc kubenswrapper[5130]: E1212 16:16:09.899642 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.000466 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.100846 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.201677 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.302553 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.403339 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.463519 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.504649 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.605387 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.705958 5130 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.806772 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:10 crc kubenswrapper[5130]: E1212 16:16:10.907584 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.007838 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.109059 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.209947 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.310448 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.410721 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.510942 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.611609 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.712376 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.813306 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:11 crc kubenswrapper[5130]: E1212 16:16:11.914257 5130 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.014868 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.115784 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.216007 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.316147 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.416518 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.517717 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.618386 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.719009 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.819396 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:12 crc kubenswrapper[5130]: E1212 16:16:12.920152 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.020711 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc 
kubenswrapper[5130]: E1212 16:16:13.121651 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.221798 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.322854 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.423316 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.524420 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.625376 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.726079 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.826441 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:13 crc kubenswrapper[5130]: E1212 16:16:13.927365 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.028574 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.128707 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.229062 5130 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.329872 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.430845 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.531388 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.631847 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: I1212 16:16:14.684577 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:16:14 crc kubenswrapper[5130]: I1212 16:16:14.685059 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:16:14 crc kubenswrapper[5130]: I1212 16:16:14.686786 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:14 crc kubenswrapper[5130]: I1212 16:16:14.686831 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:14 crc kubenswrapper[5130]: I1212 16:16:14.686841 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.687272 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:16:14 crc kubenswrapper[5130]: I1212 16:16:14.687510 5130 scope.go:117] "RemoveContainer" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca" Dec 12 16:16:14 crc 
kubenswrapper[5130]: E1212 16:16:14.687776 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.732992 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.833993 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:14 crc kubenswrapper[5130]: E1212 16:16:14.934660 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.035064 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.136114 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.236921 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.337442 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.437949 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.538722 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc 
kubenswrapper[5130]: E1212 16:16:15.639741 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.740440 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.840795 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:15 crc kubenswrapper[5130]: E1212 16:16:15.941436 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.042515 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.143227 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.243511 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.344249 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.444636 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.545626 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: I1212 16:16:16.640862 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:16:16 crc kubenswrapper[5130]: I1212 16:16:16.641257 5130 kubelet_node_status.go:413] "Setting node annotation to 
enable volume controller attach/detach" Dec 12 16:16:16 crc kubenswrapper[5130]: I1212 16:16:16.642441 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:16 crc kubenswrapper[5130]: I1212 16:16:16.642501 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:16 crc kubenswrapper[5130]: I1212 16:16:16.642518 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.643425 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:16:16 crc kubenswrapper[5130]: I1212 16:16:16.643808 5130 scope.go:117] "RemoveContainer" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.644940 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.646728 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.747536 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.848704 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:16 crc kubenswrapper[5130]: E1212 16:16:16.949644 5130 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.050149 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.150936 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.251742 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.352820 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: I1212 16:16:17.369873 5130 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 12 16:16:17 crc kubenswrapper[5130]: I1212 16:16:17.371149 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:17 crc kubenswrapper[5130]: I1212 16:16:17.371244 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:17 crc kubenswrapper[5130]: I1212 16:16:17.371263 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.371752 5130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.453762 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.554875 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" 
not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.655467 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.756630 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.857293 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:17 crc kubenswrapper[5130]: E1212 16:16:17.957747 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.058915 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.159994 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.260719 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.361349 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.444161 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.449103 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.449447 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.449544 5130 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.449666 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.449777 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.460946 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.468311 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.468379 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.468393 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.468407 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.468418 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.480559 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.490487 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.490540 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.490553 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.490571 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.490582 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.503776 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.511116 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.511204 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.511224 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.511240 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:18 crc kubenswrapper[5130]: I1212 16:16:18.511253 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:18Z","lastTransitionTime":"2025-12-12T16:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.523095 5130 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e5f274e5-0ab6-408e-b0cf-1af5b029b864\\\",\\\"systemUUID\\\":\\\"6868605f-6684-4979-9a48-308ed352f6d0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.523237 5130 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.523266 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.624250 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.724760 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.825259 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:18 crc kubenswrapper[5130]: E1212 16:16:18.925447 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.025754 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.126914 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.227087 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.328326 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.428496 5130 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.529222 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.630319 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.731379 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.832385 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:19 crc kubenswrapper[5130]: E1212 16:16:19.933516 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.033797 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.134968 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.235706 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.335977 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: I1212 16:16:20.378749 5130 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.436353 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc 
kubenswrapper[5130]: E1212 16:16:20.464748 5130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.536823 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.638466 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.738720 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.839279 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:20 crc kubenswrapper[5130]: E1212 16:16:20.940706 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:21 crc kubenswrapper[5130]: E1212 16:16:21.040946 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:21 crc kubenswrapper[5130]: E1212 16:16:21.142053 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:21 crc kubenswrapper[5130]: E1212 16:16:21.243528 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:21 crc kubenswrapper[5130]: E1212 16:16:21.344526 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:21 crc kubenswrapper[5130]: E1212 16:16:21.445958 5130 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 12 16:16:21 crc kubenswrapper[5130]: E1212 16:16:21.546946 5130 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.549205 5130 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.617740 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.628732 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.645172 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.649549 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.649618 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.649638 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.649667 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.649687 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.743555 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.752303 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.752362 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.752375 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.752394 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.752407 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.843948 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.855254 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.855314 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.855325 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.855344 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.855357 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.958068 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.958621 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.958728 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.958839 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:21 crc kubenswrapper[5130]: I1212 16:16:21.958950 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:21Z","lastTransitionTime":"2025-12-12T16:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.062457 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.062520 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.062536 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.062563 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.062583 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.165021 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.165088 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.165103 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.165132 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.165144 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.267929 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.268000 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.268013 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.268031 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.268044 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.339527 5130 apiserver.go:52] "Watching apiserver" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.353252 5130 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.354256 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qwg8p","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-multus/multus-additional-cni-plugins-mqfd8","openshift-multus/network-metrics-daemon-jhhcn","openshift-network-diagnostics/network-check-target-fhkjl","openshift-dns/node-resolver-tddhh","openshift-image-registry/node-ca-2xpcq","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-rzhgf","openshift-ovn-kubernetes/ovnkube-node-wjw4g","openshift-etcd/etcd-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.356007 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.356344 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.356538 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.356650 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.356726 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.357478 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.358665 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.360221 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.360309 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.360637 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.362632 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.362876 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.363155 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.363174 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.363620 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.363709 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.369991 5130 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.370049 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.370061 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.370081 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.370094 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.376914 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.377054 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.377135 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.379780 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.380501 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.380649 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.380873 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.383275 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.385087 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.385866 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.386277 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.386292 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.386585 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.386946 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.390440 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.390641 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.394212 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.397788 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.397821 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.398018 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.398399 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.398566 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.400259 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.401021 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.401105 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.401780 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.401889 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.401915 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.401982 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.403381 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.404087 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.404269 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.406066 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.407325 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.407524 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.408167 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.408197 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.409289 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.409299 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.414158 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.418734 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.420733 5130 scope.go:117] "RemoveContainer" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.423102 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.423549 5130 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.423731 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.426218 5130 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.434861 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.450932 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.464297 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.472160 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.472239 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.472251 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.472269 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.472280 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.476847 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480343 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480538 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480577 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480611 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480646 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480677 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480751 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 
16:16:22.480785 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480817 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480851 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480884 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480914 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480939 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480968 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.480995 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481042 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481065 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481095 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 
16:16:22.481120 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481145 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481147 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481170 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481266 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481290 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481315 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481335 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481386 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481411 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481431 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481451 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481474 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481497 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: 
\"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481518 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481535 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481555 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481572 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481601 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481576 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481629 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481665 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481695 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481674 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481724 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481858 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481899 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481939 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.481972 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482007 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482040 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482073 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482100 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482135 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482161 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482208 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482285 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482316 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482348 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482371 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482395 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482416 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482438 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482463 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482488 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482514 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482523 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482539 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482572 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482602 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482651 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482670 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.482897 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.483243 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.483263 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.483336 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.483700 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.483779 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484161 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484195 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484429 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484796 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484874 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484912 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484949 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484978 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485004 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485028 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484721 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487245 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.484869 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485175 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485348 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485417 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485533 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485513 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485766 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485805 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.485823 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.486058 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.486639 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487562 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.486821 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487113 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487332 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487065 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487529 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487920 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.487909 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488128 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488144 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488145 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488218 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488257 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488287 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488353 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488414 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488461 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488505 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488760 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488799 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488829 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488855 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488348 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488876 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488901 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488923 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488945 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488964 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.488985 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489007 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489030 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489056 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489078 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489105 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489136 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489517 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tddhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72dbaca9-d010-46f5-a645-d2713a98f846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tddhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.489728 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.490075 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.490153 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.490423 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.490529 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.490565 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.490766 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.490962 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.491957 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.491990 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.492170 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.492315 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.492434 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.492463 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.492599 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.493675 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.493789 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.494533 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.494827 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.494822 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.494919 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.494959 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.494987 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495014 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495047 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495114 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495241 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495271 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495299 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495326 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495354 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:22 crc 
kubenswrapper[5130]: I1212 16:16:22.495381 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495409 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495433 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495462 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495490 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495521 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495549 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495575 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495600 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495622 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495644 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:22 
crc kubenswrapper[5130]: I1212 16:16:22.495667 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495690 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495716 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495744 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495778 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495781 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" 
(OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495809 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495838 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495864 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495893 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495919 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: 
\"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495934 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.495945 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496173 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496237 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496265 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 12 16:16:22 crc 
kubenswrapper[5130]: I1212 16:16:22.496307 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496301 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496336 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496363 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496607 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496657 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496713 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.496844 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497197 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497348 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497364 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497408 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497416 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497456 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497482 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497508 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497537 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497565 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497597 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497810 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497842 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497870 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497898 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497929 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497960 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497988 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498019 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498059 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498086 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498110 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498136 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498165 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498210 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498239 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498268 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498295 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498321 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498345 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498372 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498397 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498423 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498449 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498476 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498506 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498618 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498648 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498674 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498703 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498741 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498776 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498803 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498826 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498851 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498875 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498900 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498925 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498957 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498983 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499011 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499051 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499076 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499101 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499128 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499289 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499322 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499348 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499416 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499444 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499473 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499504 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499532 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499562 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499592 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499612 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499639 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499665 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499685 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499711 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499736 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499760 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499786 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499818 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500345 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500399 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500432 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500457 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500481 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500502 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500530 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500558 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500578 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500597 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500618 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500641 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500663 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500691 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500723 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500754 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500775 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500800 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500829 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500922 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500960 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500995 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-os-release\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501030 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501053 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-slash\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501077 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cnibin\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501097 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501117 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6625166c-6688-498a-81c5-89ec476edef2-cni-binary-copy\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501139 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-node-log\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501159 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-script-lib\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501205 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501224 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/72dbaca9-d010-46f5-a645-d2713a98f846-hosts-file\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501246 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-cni-bin\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501265 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-kubelet\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501287 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501308 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-systemd-units\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501331 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501354 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-os-release\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501379 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501397 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-rootfs\") pod
\"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501423 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ab3d3198-2798-4180-aa5a-a0e495348125-serviceca\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501444 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gtx\" (UniqueName: \"kubernetes.io/projected/ab3d3198-2798-4180-aa5a-a0e495348125-kube-api-access-v4gtx\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501464 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5sn6\" (UniqueName: \"kubernetes.io/projected/93aaac8c-bbe8-4744-9151-f486341fc9e8-kube-api-access-s5sn6\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501488 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501517 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501540 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501565 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501587 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501608 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-system-cni-dir\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 
16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501628 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-cni-multus\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501650 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-multus-certs\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501674 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-systemd\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501699 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5qz\" (UniqueName: \"kubernetes.io/projected/b8e1069d-2de7-4735-9056-84d955d960e2-kube-api-access-dh5qz\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501728 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501761 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-proxy-tls\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501786 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501815 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/72dbaca9-d010-46f5-a645-d2713a98f846-tmp-dir\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501840 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdlbx\" (UniqueName: \"kubernetes.io/projected/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-kube-api-access-jdlbx\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501861 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-k8s-cni-cncf-io\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501886 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrvjr\" (UniqueName: \"kubernetes.io/projected/6625166c-6688-498a-81c5-89ec476edef2-kube-api-access-qrvjr\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501915 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501944 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501966 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501990 5130 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502020 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502041 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pct\" (UniqueName: \"kubernetes.io/projected/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-kube-api-access-c8pct\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502068 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbf5\" (UniqueName: \"kubernetes.io/projected/72dbaca9-d010-46f5-a645-d2713a98f846-kube-api-access-7hbf5\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502094 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-hostroot\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " 
pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502118 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6625166c-6688-498a-81c5-89ec476edef2-multus-daemon-config\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502136 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-etc-kubernetes\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502152 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-bin\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502173 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab3d3198-2798-4180-aa5a-a0e495348125-host\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502232 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502258 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502281 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-netns\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502302 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-etc-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502321 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-system-cni-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502346 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502366 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-config\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502401 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-conf-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502420 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-ovn\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502438 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-env-overrides\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502468 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502519 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-cnibin\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502540 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-socket-dir-parent\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502589 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502614 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e1069d-2de7-4735-9056-84d955d960e2-ovn-node-metrics-cert\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503557 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-cni-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503631 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-netns\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503683 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503712 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-kubelet\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503736 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-var-lib-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503763 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-log-socket\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503792 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsvtb\" (UniqueName: \"kubernetes.io/projected/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-kube-api-access-bsvtb\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503820 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-netd\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503924 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503942 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503957 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503970 5130 reconciler_common.go:299] 
"Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503984 5130 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503996 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504009 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504020 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504031 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504042 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504053 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node 
\"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504065 5130 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504081 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504097 5130 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504112 5130 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504128 5130 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504148 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504164 5130 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath 
\"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504198 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504215 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504232 5130 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497607 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497762 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.497788 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.504329 5130 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498125 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498388 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498564 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.498506 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499133 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499253 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499769 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.499835 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500010 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500106 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500271 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500310 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500732 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500732 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500758 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.500848 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501225 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501315 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501563 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501576 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501811 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.501977 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502162 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502272 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.502783 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503116 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503328 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503504 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503534 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503563 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503685 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503759 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503150 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503902 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.503922 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504129 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504255 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504567 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504752 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504770 5130 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504780 5130 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504792 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504938 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.504945 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505112 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505103 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505146 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505308 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505375 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505398 5130 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505425 5130 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505444 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505566 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.505673 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.506316 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.506064 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.506465 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.506249 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.506794 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.506803 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.506977 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507102 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507253 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507367 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507512 5130 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507645 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507713 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507857 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.507862 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.508032 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:23.008009212 +0000 UTC m=+82.905684044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.508057 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.508108 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.508125 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.508155 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.508331 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.508908 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.508972 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.509092 5130 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.509235 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.509334 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.509347 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.510356 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:23.010306938 +0000 UTC m=+82.907981770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.510375 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.510500 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.510508 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511006 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511155 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511316 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511346 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511430 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511479 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511715 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.511853 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512211 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512252 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512290 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512426 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512660 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512687 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512859 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.512985 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513056 5130 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513317 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513355 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513384 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513389 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513403 5130 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513424 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513443 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513465 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513490 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513562 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513606 
5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513636 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513660 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513701 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.513910 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.514112 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.514128 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.514867 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.514931 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:23.014243054 +0000 UTC m=+82.911917886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.515122 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.515261 5130 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.515369 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516049 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516159 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516201 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516257 5130 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516310 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516336 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516366 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516385 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516403 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516423 5130 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 
16:16:22.516444 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516462 5130 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516482 5130 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516501 5130 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516519 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516536 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516553 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516571 5130 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516587 5130 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516607 5130 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.516879 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59dee4dd-783d-48e5-a99b-f97c32b138ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0463b42147135b47f3c30fe91c661361f17208126f55c37f09085757f90b532a\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5b81097d5d72c60e20547fc181ec92d257861d0f03b88a77e50c0fd85c18f9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dc99e3b4e8b1b5293eb3c7edfb74313dd878dcbeaab60b6531bad449253b5c8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b49b27e73f8ffadb6c3f2ba01c6f20a9811a2d2e1bef1e979564e97d5211098c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b6cca39248203092b7da19e1fbdcc41fccbf547de52c3cc91dc227aa12b200f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0dc94b247ff9bd0f3ef35dd695c1cf1c1e827d31b340b6
cf9a365bb0fd9a4d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc94b247ff9bd0f3ef35dd695c1cf1c1e827d31b340b6cf9a365bb0fd9a4d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e737ef97141c311fc520d5548b5d0fa0f9791a5a26bb75389c9ba72e210d5ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e737ef97141c311fc520d5548b5d0fa0f9791a5a26bb75389c9ba72e210d5ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://05cf752c30fead512f852bc577d4a8c2151f410f239c8098b1da8a7f204f94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05cf752c30fead512f852bc577d4a8c2151f410f239c8098b1da8a7f204f94f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 
crc kubenswrapper[5130]: I1212 16:16:22.517551 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.520203 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.523516 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.523651 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.523929 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.523982 5130 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.524001 5130 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.524126 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:23.024094035 +0000 UTC m=+82.921768867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.527820 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.527851 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.527866 5130 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.527965 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:23.027939769 +0000 UTC m=+82.925614601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.528824 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.530688 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.531412 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.531444 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.531751 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.534335 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.535623 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
(UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.535702 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.535972 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.536065 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.536092 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.536067 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"603ad635-456e-4bd9-9aba-9f5882cf0440\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f1a01912ddee091b284981f73500faf3fcfd7a1071596baf5cd12e42fadf2802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3f84b80c2f32e68a8eb79916fece466ce160a92d4d9b989d1bfd37673b951c48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://818cbab9fa2109ab2203469a2d7999f6b39f7f70722424aa9e78038d779eb741\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad11549986
f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-12T16:16:08Z\\\",\\\"message\\\":\\\"ar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsPreferCBOR\\\\\\\" enabled=false\\\\nW1212 16:16:07.522604 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1212 16:16:07.522767 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1212 16:16:07.524035 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3073921847/tls.crt::/tmp/serving-cert-3073921847/tls.key\\\\\\\"\\\\nI1212 16:16:07.999788 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1212 16:16:08.001984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1212 16:16:08.001998 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1212 16:16:08.002024 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1212 16:16:08.002030 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1212 16:16:08.028279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1212 16:16:08.028351 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:16:08.028357 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1212 16:16:08.028361 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1212 16:16:08.028365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1212 16:16:08.028368 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1212 16:16:08.028371 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1212 16:16:08.028922 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1212 16:16:08.037450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-12T16:16:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://96c12daa01120f19be833f82d5f8c18b27d7dc4c74ac5543dd248efa1a9301d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dba3c0695675d41f391363533d51f6311cd8233a6619881a3913b8726c0f824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dba3c0695675d41f391363533d51f6311cd8233a6619881a3913b8726c0f824\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.536263 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.536604 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). 
InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.536678 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.537633 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.538160 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.538530 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.538588 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.538789 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.539041 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.539269 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.539349 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.539356 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.539462 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.539664 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.539877 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.540013 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.540083 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.540429 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.540507 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542041 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542122 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542047 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542251 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542270 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542354 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542415 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.542818 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.543102 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.543220 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.543880 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.543957 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.543998 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.544445 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.544479 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.544525 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.544958 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.547428 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.547672 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.548216 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.548622 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.549959 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.550163 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4f6a8ed-4732-4f68-b5db-1b1d424f77e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e1386908a4c2cb67a9fce9909e6c1675ffbcbdb90691dc350e3ed7bd7afa8fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6854378cf805fce7df5492be1ae30bc52f8aca905045464aa4db144246de92d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4d07f13a9fe0e83c34c0425a9e6d5caa0ed5d9ef8b200e2b45b1aa3882ee3cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac19827bea2108e590e57983557d3f9158fd935eedbff3452bfcf0437e6d4ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac19827bea2108e590e57983557d3f9158fd935eedbff3452bfcf0437e6d4ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.550873 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.551742 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.553821 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.554445 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.554702 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.555344 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.555412 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.560112 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.564879 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.572016 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.576144 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.576223 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.576239 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.576261 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.576276 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.579339 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.583222 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.586873 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.589773 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jhhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsvtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsvtb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jhhcn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.604471 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-rzhgf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6625166c-6688-498a-81c5-89ec476edef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrvjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rzhgf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617193 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93aaac8c-bbe8-4744-9151-f486341fc9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5sn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5sn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xtrkr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617518 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-cnibin\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617574 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-socket-dir-parent\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617621 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617647 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e1069d-2de7-4735-9056-84d955d960e2-ovn-node-metrics-cert\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617667 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-cni-dir\") pod \"multus-rzhgf\" (UID: 
\"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617691 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-netns\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617711 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-cnibin\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617719 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-kubelet\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617761 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-kubelet\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617795 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-var-lib-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617814 5130 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617819 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-log-socket\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617844 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bsvtb\" (UniqueName: \"kubernetes.io/projected/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-kube-api-access-bsvtb\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617880 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-netd\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617915 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617942 5130 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617968 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-os-release\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.617991 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-slash\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618014 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cnibin\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618045 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cnibin\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618091 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-socket-dir-parent\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618119 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-var-lib-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618161 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-log-socket\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618213 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-cni-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618407 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-slash\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618422 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-netd\") pod \"ovnkube-node-wjw4g\" (UID: 
\"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618496 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618526 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-netns\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618568 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6625166c-6688-498a-81c5-89ec476edef2-cni-binary-copy\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.618468 5130 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618637 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-node-log\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618666 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-script-lib\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618697 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618719 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/72dbaca9-d010-46f5-a645-d2713a98f846-hosts-file\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618737 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-cni-bin\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618763 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-cni-bin\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: E1212 16:16:22.618775 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs podName:4e8bbb2d-9d91-4541-a2d2-891ab81dd883 nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:23.118735726 +0000 UTC m=+83.016410768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs") pod "network-metrics-daemon-jhhcn" (UID: "4e8bbb2d-9d91-4541-a2d2-891ab81dd883") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.618793 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-node-log\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619270 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-kubelet\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619319 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-systemd-units\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619347 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-os-release\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619407 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-rootfs\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619435 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ab3d3198-2798-4180-aa5a-a0e495348125-serviceca\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619458 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v4gtx\" (UniqueName: \"kubernetes.io/projected/ab3d3198-2798-4180-aa5a-a0e495348125-kube-api-access-v4gtx\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619479 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-mcd-auth-proxy-config\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619490 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5sn6\" (UniqueName: \"kubernetes.io/projected/93aaac8c-bbe8-4744-9151-f486341fc9e8-kube-api-access-s5sn6\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619492 5130 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-os-release\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619555 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619580 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cni-binary-copy\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619638 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-os-release\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619646 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619688 5130 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-kubelet\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619708 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-system-cni-dir\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619708 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619736 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-systemd-units\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619739 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-cni-multus\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619769 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-var-lib-cni-multus\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619777 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-multus-certs\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619801 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-systemd\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619826 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dh5qz\" (UniqueName: \"kubernetes.io/projected/b8e1069d-2de7-4735-9056-84d955d960e2-kube-api-access-dh5qz\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619828 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-script-lib\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619846 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619833 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-system-cni-dir\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619874 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-proxy-tls\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619896 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619899 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619927 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/72dbaca9-d010-46f5-a645-d2713a98f846-tmp-dir\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619950 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jdlbx\" (UniqueName: \"kubernetes.io/projected/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-kube-api-access-jdlbx\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619974 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-k8s-cni-cncf-io\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.619997 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qrvjr\" (UniqueName: \"kubernetes.io/projected/6625166c-6688-498a-81c5-89ec476edef2-kube-api-access-qrvjr\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620023 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620048 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620073 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620095 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pct\" (UniqueName: \"kubernetes.io/projected/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-kube-api-access-c8pct\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620115 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbf5\" (UniqueName: \"kubernetes.io/projected/72dbaca9-d010-46f5-a645-d2713a98f846-kube-api-access-7hbf5\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620131 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-hostroot\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620150 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6625166c-6688-498a-81c5-89ec476edef2-multus-daemon-config\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620167 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-etc-kubernetes\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620199 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-bin\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620218 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab3d3198-2798-4180-aa5a-a0e495348125-host\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620235 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620593 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-rootfs\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620640 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-multus-certs\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620683 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620721 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-systemd\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620758 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6625166c-6688-498a-81c5-89ec476edef2-cni-binary-copy\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620835 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620762 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-host-run-k8s-cni-cncf-io\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620758 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.620940 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/72dbaca9-d010-46f5-a645-d2713a98f846-hosts-file\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.621414 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-hostroot\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.621463 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-bin\") pod \"ovnkube-node-wjw4g\" (UID: 
\"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.621484 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.621501 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-etc-kubernetes\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.621535 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6625166c-6688-498a-81c5-89ec476edef2-multus-daemon-config\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.621551 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab3d3198-2798-4180-aa5a-a0e495348125-host\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622049 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/72dbaca9-d010-46f5-a645-d2713a98f846-tmp-dir\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 
16:16:22.622153 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622642 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622759 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622847 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-netns\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622922 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-etc-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc 
kubenswrapper[5130]: I1212 16:16:22.623962 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-system-cni-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624092 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-config\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622934 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.623780 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e1069d-2de7-4735-9056-84d955d960e2-ovn-node-metrics-cert\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.623801 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ab3d3198-2798-4180-aa5a-a0e495348125-serviceca\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624155 5130 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-system-cni-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622969 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-netns\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.622999 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-etc-openvswitch\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624548 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-proxy-tls\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624625 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-conf-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624674 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-ovn\") pod 
\"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624716 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6625166c-6688-498a-81c5-89ec476edef2-multus-conf-dir\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624716 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-env-overrides\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624826 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-ovn\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624947 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624974 5130 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.624988 5130 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625001 5130 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625010 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-config\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625015 5130 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625069 5130 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625086 5130 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625101 5130 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625117 5130 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625132 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625147 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625163 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625195 5130 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625210 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625227 5130 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625241 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625256 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625271 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625287 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625302 5130 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625340 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-env-overrides\") pod \"ovnkube-node-wjw4g\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625358 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625804 5130 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625873 5130 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.625940 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626005 5130 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626066 5130 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626121 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626193 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626262 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626327 5130 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626386 5130 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626443 5130 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626499 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626569 5130 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626648 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626707 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626716 5130 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626769 5130 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626783 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626794 5130 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626804 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626815 5130 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626826 5130 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626837 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626847 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626857 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626868 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626878 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626888 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626898 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc 
kubenswrapper[5130]: I1212 16:16:22.626910 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626919 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626930 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626940 5130 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626953 5130 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626963 5130 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626974 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626983 5130 reconciler_common.go:299] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.626994 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627004 5130 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627013 5130 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627027 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627038 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627048 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627058 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 
16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627069 5130 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627080 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627090 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627103 5130 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627113 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627122 5130 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627133 5130 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627143 5130 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627154 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627165 5130 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627237 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627255 5130 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627268 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627278 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627295 5130 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627308 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627323 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627335 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627347 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627401 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627411 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627420 5130 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 
16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627430 5130 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627444 5130 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627455 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627465 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627475 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627488 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627497 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627507 5130 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627518 5130 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627528 5130 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627537 5130 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627549 5130 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627558 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627570 5130 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627579 5130 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on 
node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627589 5130 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627599 5130 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627609 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627618 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627628 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627640 5130 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627650 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627658 5130 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627668 5130 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627678 5130 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627688 5130 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627699 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627709 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627719 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627728 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" 
DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627737 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627748 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627758 5130 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627772 5130 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627786 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627801 5130 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627814 5130 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627827 5130 reconciler_common.go:299] 
"Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627840 5130 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627852 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627866 5130 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627880 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627894 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627905 5130 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627918 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: 
\"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627931 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627945 5130 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627956 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627967 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627977 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627988 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.627998 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: 
\"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628010 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628020 5130 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628030 5130 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628039 5130 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628048 5130 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628057 5130 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628066 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 
16:16:22.628076 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628087 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628097 5130 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628109 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628118 5130 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628128 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.628138 5130 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.633051 5130 status_manager.go:919] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb541cc4-2231-483b-905a-33117bdc53dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f4b79b1b2b016b2f05ce1eb552b7f562fe2b200053382577073b9746227781c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d06abcd904ffab6d0e7ef275a88cc4d48ca01cbaf45c12b67e4ce3961c69e34f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad212f911518bdcc0e46bbe51292c8675d6eaf9ff02549547d8c35be6da8a3d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b218d02334c8f81397f2f6b9c264419d6ec78b17441587c786444979c8fd4db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.644263 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4gtx\" (UniqueName: \"kubernetes.io/projected/ab3d3198-2798-4180-aa5a-a0e495348125-kube-api-access-v4gtx\") pod \"node-ca-2xpcq\" (UID: \"ab3d3198-2798-4180-aa5a-a0e495348125\") " pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.644779 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrvjr\" (UniqueName: \"kubernetes.io/projected/6625166c-6688-498a-81c5-89ec476edef2-kube-api-access-qrvjr\") pod \"multus-rzhgf\" (UID: \"6625166c-6688-498a-81c5-89ec476edef2\") " pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.644968 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5sn6\" (UniqueName: \"kubernetes.io/projected/93aaac8c-bbe8-4744-9151-f486341fc9e8-kube-api-access-s5sn6\") pod \"ovnkube-control-plane-57b78d8988-xtrkr\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.646528 5130 status_manager.go:919] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.647843 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdlbx\" (UniqueName: \"kubernetes.io/projected/fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75-kube-api-access-jdlbx\") pod \"multus-additional-cni-plugins-mqfd8\" (UID: \"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\") " pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.650028 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pct\" (UniqueName: \"kubernetes.io/projected/5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e-kube-api-access-c8pct\") pod \"machine-config-daemon-qwg8p\" (UID: \"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\") " pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.650088 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh5qz\" (UniqueName: \"kubernetes.io/projected/b8e1069d-2de7-4735-9056-84d955d960e2-kube-api-access-dh5qz\") pod \"ovnkube-node-wjw4g\" (UID: 
\"b8e1069d-2de7-4735-9056-84d955d960e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.651315 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsvtb\" (UniqueName: \"kubernetes.io/projected/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-kube-api-access-bsvtb\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.651491 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbf5\" (UniqueName: \"kubernetes.io/projected/72dbaca9-d010-46f5-a645-d2713a98f846-kube-api-access-7hbf5\") pod \"node-resolver-tddhh\" (UID: \"72dbaca9-d010-46f5-a645-d2713a98f846\") " pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.658727 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.671734 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.672831 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.677782 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.678024 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.678215 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.678230 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.678388 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.678407 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.684376 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tddhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72dbaca9-d010-46f5-a645-d2713a98f846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tddhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.689126 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.691679 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-82326151015e9fe2de0bfc9eaabc8eb4078e73845570a27d4d814146bdc63ead WatchSource:0}: Error finding container 82326151015e9fe2de0bfc9eaabc8eb4078e73845570a27d4d814146bdc63ead: Status 404 returned error can't find the container with id 82326151015e9fe2de0bfc9eaabc8eb4078e73845570a27d4d814146bdc63ead Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.696462 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"82326151015e9fe2de0bfc9eaabc8eb4078e73845570a27d4d814146bdc63ead"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.699982 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"061632e701192adee50a9ef41e94731b42f20a0b8a94f3ccf40235879e96858c"} Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.700082 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdlbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdlbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdlbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdlbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdlbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdlbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdlbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mqfd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.701584 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-61d37cb64a22149692015dcb7451c870b4f90e4ddc0e5535e42d953b943da248 WatchSource:0}: Error finding container 61d37cb64a22149692015dcb7451c870b4f90e4ddc0e5535e42d953b943da248: Status 404 returned error can't find the container with id 61d37cb64a22149692015dcb7451c870b4f90e4ddc0e5535e42d953b943da248 Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.702841 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tddhh" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.713262 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.717446 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.722012 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72dbaca9_d010_46f5_a645_d2713a98f846.slice/crio-82326b29b91d5a390011ce7c9741038ceba7c9b6c231ac1b637948c0be0a211d WatchSource:0}: Error finding container 82326b29b91d5a390011ce7c9741038ceba7c9b6c231ac1b637948c0be0a211d: Status 404 returned error can't find the container with id 82326b29b91d5a390011ce7c9741038ceba7c9b6c231ac1b637948c0be0a211d Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.727805 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8pct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8pct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qwg8p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.729013 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.735274 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2xpcq" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.742525 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2xpcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab3d3198-2798-4180-aa5a-a0e495348125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v4gtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2xpcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.746054 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rzhgf" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.755096 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.761999 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.762770 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8e1069d-2de7-4735-9056-84d955d960e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:16:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dh5qz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:16:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wjw4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.769608 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbe9d4b4_6ed6_4516_a3b9_5aaa9f447f75.slice/crio-6a1f75c75c3c20e19b4453cc3f6b2edf529debdbea663b066a00ea791ebd5931 WatchSource:0}: Error finding container 6a1f75c75c3c20e19b4453cc3f6b2edf529debdbea663b066a00ea791ebd5931: Status 404 returned error can't find the container with id 6a1f75c75c3c20e19b4453cc3f6b2edf529debdbea663b066a00ea791ebd5931 Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.771752 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab3d3198_2798_4180_aa5a_a0e495348125.slice/crio-9aaddcc979920044d152a589fc02b980c36e7aa04191d55c7e1b556028f65d04 WatchSource:0}: Error finding container 9aaddcc979920044d152a589fc02b980c36e7aa04191d55c7e1b556028f65d04: Status 404 returned error can't find the container with id 9aaddcc979920044d152a589fc02b980c36e7aa04191d55c7e1b556028f65d04 Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.773696 5130 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9856d45b-d8e9-48dc-8d3b-2821b312174c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-12T16:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd154bfa81490323b7f5172029ebfba0bc643278a95dfb6bfe97ab4ae3d4c67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-12T16:15:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://79e86050c0bb4b8c438f3446d4ac411026fcf52903b4ef55a079a1dfc8e41ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79e86050c0bb4b8c438f3446d4ac411026fcf52903b4ef55a079a1dfc8e41ace\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-12T16:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-12T16:15:01Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-12T16:15:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.775356 5130 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eed03e3_b46f_4ae0_a063_d9a0d64c3a7e.slice/crio-ccbc5cf9937e6999e434607a16a4c9b14f241174abfef891376f6a93c0d67f9a WatchSource:0}: Error finding container ccbc5cf9937e6999e434607a16a4c9b14f241174abfef891376f6a93c0d67f9a: Status 404 returned error can't find the container with id ccbc5cf9937e6999e434607a16a4c9b14f241174abfef891376f6a93c0d67f9a Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.780872 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.780935 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.780951 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.780972 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.780986 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.798162 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e1069d_2de7_4735_9056_84d955d960e2.slice/crio-9772d84159436903e9c630cf5be836e1db70ed160ca3edf443a5851baa0aed8a WatchSource:0}: Error finding container 9772d84159436903e9c630cf5be836e1db70ed160ca3edf443a5851baa0aed8a: Status 404 returned error can't find the container with id 9772d84159436903e9c630cf5be836e1db70ed160ca3edf443a5851baa0aed8a Dec 12 16:16:22 crc kubenswrapper[5130]: W1212 16:16:22.810007 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93aaac8c_bbe8_4744_9151_f486341fc9e8.slice/crio-ba06b437859831a4ba5b19dd77097aa461f0e5204c92fa041860480150a422a6 WatchSource:0}: Error finding container ba06b437859831a4ba5b19dd77097aa461f0e5204c92fa041860480150a422a6: Status 404 returned error can't find the container with id ba06b437859831a4ba5b19dd77097aa461f0e5204c92fa041860480150a422a6 Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.883339 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.883397 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.883412 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.883433 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:22 crc kubenswrapper[5130]: I1212 16:16:22.883446 5130 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:22Z","lastTransitionTime":"2025-12-12T16:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.005816 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.006164 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.006199 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.006319 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.006331 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.034320 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.034490 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:24.034451485 +0000 UTC m=+83.932126317 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.035099 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.035162 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.035209 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.035236 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035426 5130 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035462 5130 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035488 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:24.03547901 +0000 UTC m=+83.933153842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035568 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035582 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035595 5130 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035582 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:24.035561482 +0000 UTC m=+83.933236314 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.035624 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:24.035616234 +0000 UTC m=+83.933291066 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.036326 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.036383 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.036399 5130 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:23 crc 
kubenswrapper[5130]: E1212 16:16:23.036616 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:24.036553636 +0000 UTC m=+83.934228468 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.112903 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.112970 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.112993 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.113017 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.113033 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.137129 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.137489 5130 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:23 crc kubenswrapper[5130]: E1212 16:16:23.137583 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs podName:4e8bbb2d-9d91-4541-a2d2-891ab81dd883 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:24.137562572 +0000 UTC m=+84.035237404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs") pod "network-metrics-daemon-jhhcn" (UID: "4e8bbb2d-9d91-4541-a2d2-891ab81dd883") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.215922 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.215960 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.215970 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.215987 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.215998 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.318510 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.318551 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.318562 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.318576 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.318589 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.421717 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.423552 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.423703 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.423833 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.423963 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.528485 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.529197 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.529298 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.529397 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.529498 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.632538 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.632607 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.632622 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.632643 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.632657 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.709926 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"bc1de66bf26e09db288b91986d42599911062d158bfbb1ac3b362d7c3569f9e2"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.709994 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"838de4699338dd4e7d30333a949e6cb1a6138f776342c3060f514607366e6fc4"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.711769 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" event={"ID":"93aaac8c-bbe8-4744-9151-f486341fc9e8","Type":"ContainerStarted","Data":"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.711795 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" event={"ID":"93aaac8c-bbe8-4744-9151-f486341fc9e8","Type":"ContainerStarted","Data":"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.711804 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" event={"ID":"93aaac8c-bbe8-4744-9151-f486341fc9e8","Type":"ContainerStarted","Data":"ba06b437859831a4ba5b19dd77097aa461f0e5204c92fa041860480150a422a6"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.714050 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a" exitCode=0
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.714137 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.714227 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"9772d84159436903e9c630cf5be836e1db70ed160ca3edf443a5851baa0aed8a"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.715984 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"3ff6e6d69ff010e8719dff16ae5ca0f54d43f2ef597b8a55b6ba516fc6ebe608"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.716015 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"945d8bb14b5e6a98fa9e0d91e099375cda051376ad0d1a72bc65b3cc8a701a5f"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.716024 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"ccbc5cf9937e6999e434607a16a4c9b14f241174abfef891376f6a93c0d67f9a"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.718596 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tddhh" event={"ID":"72dbaca9-d010-46f5-a645-d2713a98f846","Type":"ContainerStarted","Data":"90b2267c0aec640ceca33135e89d342da7981f15cc923a2d5e769a6bf0a86891"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.718623 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tddhh" event={"ID":"72dbaca9-d010-46f5-a645-d2713a98f846","Type":"ContainerStarted","Data":"82326b29b91d5a390011ce7c9741038ceba7c9b6c231ac1b637948c0be0a211d"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.722330 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"e22742b7fec2a37f39951331173900846034f54786404ed9c4391d68e987d30a"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.724371 5130 generic.go:358] "Generic (PLEG): container finished" podID="fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75" containerID="056563fc2e85c919d860d1c8edb89c705b029425826b4e159e3c5544cc36916b" exitCode=0
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.724458 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerDied","Data":"056563fc2e85c919d860d1c8edb89c705b029425826b4e159e3c5544cc36916b"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.724491 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerStarted","Data":"6a1f75c75c3c20e19b4453cc3f6b2edf529debdbea663b066a00ea791ebd5931"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.726375 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2xpcq" event={"ID":"ab3d3198-2798-4180-aa5a-a0e495348125","Type":"ContainerStarted","Data":"d5a7a6c2f6d30ffd1239019561af56df424ae668775877d4745c5b4731359e97"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.726501 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2xpcq" event={"ID":"ab3d3198-2798-4180-aa5a-a0e495348125","Type":"ContainerStarted","Data":"9aaddcc979920044d152a589fc02b980c36e7aa04191d55c7e1b556028f65d04"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.729800 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"61d37cb64a22149692015dcb7451c870b4f90e4ddc0e5535e42d953b943da248"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.731833 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzhgf" event={"ID":"6625166c-6688-498a-81c5-89ec476edef2","Type":"ContainerStarted","Data":"afec02ecdbcab7dac8db37c3a4ff38d4b68bab32ea1f47c40b0bb4f77a533698"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.731918 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzhgf" event={"ID":"6625166c-6688-498a-81c5-89ec476edef2","Type":"ContainerStarted","Data":"525c60c926ecb19e50c13aa3dbaf4c1acef9db6fabadf5bbc0411b8f0b0aa91a"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.738446 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.738506 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.738551 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.738573 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.738586 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.807881 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.807857657 podStartE2EDuration="2.807857657s" podCreationTimestamp="2025-12-12 16:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:23.807824147 +0000 UTC m=+83.705498999" watchObservedRunningTime="2025-12-12 16:16:23.807857657 +0000 UTC m=+83.705532489"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.842802 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.843271 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.843281 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.843299 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.843308 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.855514 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.8554951710000003 podStartE2EDuration="2.855495171s" podCreationTimestamp="2025-12-12 16:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:23.851537894 +0000 UTC m=+83.749212736" watchObservedRunningTime="2025-12-12 16:16:23.855495171 +0000 UTC m=+83.753170003"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.910973 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.9109534740000003 podStartE2EDuration="2.910953474s" podCreationTimestamp="2025-12-12 16:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:23.889646214 +0000 UTC m=+83.787321046" watchObservedRunningTime="2025-12-12 16:16:23.910953474 +0000 UTC m=+83.808628306"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.945425 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.945470 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.945481 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.945496 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:23 crc kubenswrapper[5130]: I1212 16:16:23.945512 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:23Z","lastTransitionTime":"2025-12-12T16:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.034039 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=3.034017999 podStartE2EDuration="3.034017999s" podCreationTimestamp="2025-12-12 16:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:24.018627523 +0000 UTC m=+83.916302345" watchObservedRunningTime="2025-12-12 16:16:24.034017999 +0000 UTC m=+83.931692831"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.048765 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.048915 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.048951 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.048973 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.048992 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049073 5130 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049126 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.049113188 +0000 UTC m=+85.946788020 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049457 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.049448456 +0000 UTC m=+85.947123288 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049562 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049575 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049585 5130 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049611 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.04960395 +0000 UTC m=+85.947278782 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049651 5130 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049672 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.049666241 +0000 UTC m=+85.947341073 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049709 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049717 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049726 5130 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.049747 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.049740993 +0000 UTC m=+85.947415825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.077064 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.077134 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.077149 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.077171 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.077199 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.149713 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.149835 5130 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.149889 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs podName:4e8bbb2d-9d91-4541-a2d2-891ab81dd883 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:26.149875788 +0000 UTC m=+86.047550620 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs") pod "network-metrics-daemon-jhhcn" (UID: "4e8bbb2d-9d91-4541-a2d2-891ab81dd883") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.180025 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.180598 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.180617 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.180640 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.180659 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.286502 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.286574 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.286590 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.286614 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.286626 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.297344 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rzhgf" podStartSLOduration=65.297317916 podStartE2EDuration="1m5.297317916s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:24.269397856 +0000 UTC m=+84.167072688" watchObservedRunningTime="2025-12-12 16:16:24.297317916 +0000 UTC m=+84.194992748"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.314349 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tddhh" podStartSLOduration=65.314327362 podStartE2EDuration="1m5.314327362s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:24.31425265 +0000 UTC m=+84.211927512" watchObservedRunningTime="2025-12-12 16:16:24.314327362 +0000 UTC m=+84.212002194"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.315241 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" podStartSLOduration=64.315233834 podStartE2EDuration="1m4.315233834s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:24.296792954 +0000 UTC m=+84.194467786" watchObservedRunningTime="2025-12-12 16:16:24.315233834 +0000 UTC m=+84.212908666"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.387095 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.387117 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.387247 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.387309 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.387483 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.387602 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.387661 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:24 crc kubenswrapper[5130]: E1212 16:16:24.387757 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.388471 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.388503 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.388511 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.388524 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.388534 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.394374 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.395352 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.397879 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.399546 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.402260 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.404103 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.406377 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.409975 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.410642 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.411918 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.414585 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.416096 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.416854 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.419715 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.420481 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.423048 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.423960 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.426050 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.433846 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.436845 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.437805 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.441061 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.442009 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podStartSLOduration=65.441974078 podStartE2EDuration="1m5.441974078s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-12 16:16:24.440634375 +0000 UTC m=+84.338309207" watchObservedRunningTime="2025-12-12 16:16:24.441974078 +0000 UTC m=+84.339648910" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.446948 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.448280 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.451582 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.452649 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.454491 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.455930 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.459861 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.461281 5130 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2xpcq" podStartSLOduration=65.461265149 podStartE2EDuration="1m5.461265149s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:24.461156006 +0000 UTC m=+84.358830838" watchObservedRunningTime="2025-12-12 16:16:24.461265149 +0000 UTC m=+84.358939981" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.462325 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.463829 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.465299 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.467138 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.468124 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.475698 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.476514 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.479858 5130 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.480010 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.485599 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.486930 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.487997 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.489453 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.490018 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.494987 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.495985 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.496978 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.497034 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.497048 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.497070 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.497140 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.497045 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.498474 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.500870 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.503465 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.504782 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.506211 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.507348 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.508580 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.509989 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.512051 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.513554 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.514565 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.516025 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.600510 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.600583 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.600599 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.600626 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.600642 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.702858 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.702905 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.702914 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.702929 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.702938 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.738599 5130 generic.go:358] "Generic (PLEG): container finished" podID="fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75" containerID="042e678b6536164d5f99ad688025cbdb1906dd353887a02db6028699e1944b69" exitCode=0 Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.738843 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerDied","Data":"042e678b6536164d5f99ad688025cbdb1906dd353887a02db6028699e1944b69"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.745118 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.745166 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.745205 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.745223 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.745237 5130 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.806792 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.806868 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.806881 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.806902 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.806916 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.908851 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.908905 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.908916 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.908934 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:24 crc kubenswrapper[5130]: I1212 16:16:24.908947 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:24Z","lastTransitionTime":"2025-12-12T16:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.011101 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.011160 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.011193 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.011216 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.011232 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.113918 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.113967 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.113980 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.114003 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.114022 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.216105 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.216152 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.216162 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.216204 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.216217 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.318988 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.319053 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.319068 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.319088 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.319102 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.421316 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.421844 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.421859 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.421878 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.421894 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.524385 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.524437 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.524451 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.524470 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.524481 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.626316 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.626349 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.626357 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.626370 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.626382 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.729569 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.729625 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.729646 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.729664 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.729678 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.753749 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.756873 5130 generic.go:358] "Generic (PLEG): container finished" podID="fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75" containerID="22914ef91cdb64d38387ce525dce647cfbb4bec0b249d30b4f26c417cafc09ce" exitCode=0
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.756918 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerDied","Data":"22914ef91cdb64d38387ce525dce647cfbb4bec0b249d30b4f26c417cafc09ce"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.834428 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.834470 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.834481 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.834496 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.834505 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.937902 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.937962 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.937973 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.937995 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:25 crc kubenswrapper[5130]: I1212 16:16:25.938010 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:25Z","lastTransitionTime":"2025-12-12T16:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.040798 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.040866 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.040880 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.040896 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.040909 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.107241 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.107393 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.107444 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107522 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:30.107491651 +0000 UTC m=+90.005166483 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107535 5130 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.107562 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.107611 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107616 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107641 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107651 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:30.107623724 +0000 UTC m=+90.005298556 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107656 5130 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107712 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107722 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107725 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:30.107713406 +0000 UTC m=+90.005388398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107729 5130 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107654 5130 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107764 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:30.107757127 +0000 UTC m=+90.005431959 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.107777 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:30.107771538 +0000 UTC m=+90.005446600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.143935 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.144388 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.144493 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.144581 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.144661 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.209322 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.209973 5130 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.210239 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs podName:4e8bbb2d-9d91-4541-a2d2-891ab81dd883 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:30.210170328 +0000 UTC m=+90.107845170 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs") pod "network-metrics-daemon-jhhcn" (UID: "4e8bbb2d-9d91-4541-a2d2-891ab81dd883") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.247353 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.247405 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.247415 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.247430 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.247440 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.351103 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.351192 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.351207 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.351228 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.351244 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.368962 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.368962 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.369162 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.369162 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.369352 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883"
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.369430 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.369553 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:26 crc kubenswrapper[5130]: E1212 16:16:26.369647 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.453290 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.453340 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.453352 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.453370 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.453382 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.554864 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.554901 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.554910 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.554924 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.554934 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.656670 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.656713 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.656726 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.656742 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.656752 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.758865 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.758912 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.758924 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.758939 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.758948 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.762462 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"85af87ef61e5549e576dcb527568c20bb045e303b6416006b2782894749c8a82"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.766050 5130 generic.go:358] "Generic (PLEG): container finished" podID="fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75" containerID="2228de07bcd1287666268a355bbe38433d3d7e11fda8b84930c6d5fa09fb0b52" exitCode=0
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.766081 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerDied","Data":"2228de07bcd1287666268a355bbe38433d3d7e11fda8b84930c6d5fa09fb0b52"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.863405 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.863441 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.863450 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.863463 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.863473 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.965988 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.966048 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.966067 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.966082 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:26 crc kubenswrapper[5130]: I1212 16:16:26.966093 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:26Z","lastTransitionTime":"2025-12-12T16:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.069031 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.069088 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.069102 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.069118 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.069129 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.171735 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.171779 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.171788 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.171802 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.171812 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.273486 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.273576 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.273590 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.273606 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.273617 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.376481 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.376526 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.376537 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.376552 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.376561 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.478652 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.478719 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.478742 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.478764 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.478778 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.581009 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.581058 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.581070 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.581089 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.581101 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.683562 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.683616 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.683630 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.683649 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.683662 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.772643 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.775645 5130 generic.go:358] "Generic (PLEG): container finished" podID="fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75" containerID="d3a6d3a044ef65a7103bf326f57acda0f5c6399d8d794f42e5c8570161db2c95" exitCode=0 Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.775734 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerDied","Data":"d3a6d3a044ef65a7103bf326f57acda0f5c6399d8d794f42e5c8570161db2c95"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.785118 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.785160 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.785171 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.785196 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.785205 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.888059 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.888137 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.888159 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.888213 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.888239 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.991353 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.991411 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.991431 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.991449 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:27 crc kubenswrapper[5130]: I1212 16:16:27.991463 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:27Z","lastTransitionTime":"2025-12-12T16:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.093759 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.094098 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.094108 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.094124 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.094134 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.195947 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.195990 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.195999 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.196015 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.196026 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.298324 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.298374 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.298384 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.298409 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.298419 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.374654 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.374804 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:28 crc kubenswrapper[5130]: E1212 16:16:28.374824 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:28 crc kubenswrapper[5130]: E1212 16:16:28.374933 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.374953 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.374999 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:28 crc kubenswrapper[5130]: E1212 16:16:28.375098 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:28 crc kubenswrapper[5130]: E1212 16:16:28.375316 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.400965 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.401023 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.401035 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.401054 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.401069 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.504311 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.504373 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.504392 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.504415 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.504434 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.608081 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.608157 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.608172 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.608220 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.608241 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.705992 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.706105 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.706120 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.706146 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.706164 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.731917 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.731992 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.732008 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.732032 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.732049 5130 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T16:16:28Z","lastTransitionTime":"2025-12-12T16:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 12 16:16:28 crc kubenswrapper[5130]: I1212 16:16:28.761700 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps"] Dec 12 16:16:29 crc kubenswrapper[5130]: I1212 16:16:29.390406 5130 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 16:16:29 crc kubenswrapper[5130]: I1212 16:16:29.400241 5130 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 16:16:29 crc kubenswrapper[5130]: I1212 16:16:29.793412 5130 generic.go:358] "Generic (PLEG): container finished" podID="fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75" containerID="8f4a4ece4a6b06f4774019d21ef826082bab34e134afd60df80429b03ef272b3" exitCode=0 Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.155126 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155344 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:38.155308455 +0000 UTC m=+98.052983287 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.155411 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.155539 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.155595 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.155638 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155652 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155680 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155699 5130 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155735 5130 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155787 5130 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155789 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:38.155763286 +0000 UTC m=+98.053438128 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155837 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155888 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155901 5130 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155850 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:38.155838878 +0000 UTC m=+98.053513920 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155966 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:38.155957041 +0000 UTC m=+98.053631873 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.155977 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:38.155972041 +0000 UTC m=+98.053646873 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.256952 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.257553 5130 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.257692 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs podName:4e8bbb2d-9d91-4541-a2d2-891ab81dd883 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:38.257664884 +0000 UTC m=+98.155339726 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs") pod "network-metrics-daemon-jhhcn" (UID: "4e8bbb2d-9d91-4541-a2d2-891ab81dd883") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.343684 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerStarted","Data":"8f4a4ece4a6b06f4774019d21ef826082bab34e134afd60df80429b03ef272b3"} Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.343818 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerDied","Data":"8f4a4ece4a6b06f4774019d21ef826082bab34e134afd60df80429b03ef272b3"} Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.344507 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.344770 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.344960 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.344992 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.345166 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.345281 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.345342 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.345690 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:30 crc kubenswrapper[5130]: E1212 16:16:30.345810 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.349043 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.350282 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.350565 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.350895 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.458604 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be77ab3-0638-4ffa-960a-34823c8e08a1-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.458659 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3be77ab3-0638-4ffa-960a-34823c8e08a1-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.458744 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/3be77ab3-0638-4ffa-960a-34823c8e08a1-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.459580 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3be77ab3-0638-4ffa-960a-34823c8e08a1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.459762 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3be77ab3-0638-4ffa-960a-34823c8e08a1-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.560572 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3be77ab3-0638-4ffa-960a-34823c8e08a1-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.560644 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be77ab3-0638-4ffa-960a-34823c8e08a1-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.560666 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3be77ab3-0638-4ffa-960a-34823c8e08a1-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.560683 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3be77ab3-0638-4ffa-960a-34823c8e08a1-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.560715 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3be77ab3-0638-4ffa-960a-34823c8e08a1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.560997 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3be77ab3-0638-4ffa-960a-34823c8e08a1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.561290 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3be77ab3-0638-4ffa-960a-34823c8e08a1-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.562592 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3be77ab3-0638-4ffa-960a-34823c8e08a1-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.567234 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be77ab3-0638-4ffa-960a-34823c8e08a1-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.579317 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3be77ab3-0638-4ffa-960a-34823c8e08a1-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-d85ps\" (UID: \"3be77ab3-0638-4ffa-960a-34823c8e08a1\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.661298 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" Dec 12 16:16:30 crc kubenswrapper[5130]: W1212 16:16:30.675146 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3be77ab3_0638_4ffa_960a_34823c8e08a1.slice/crio-3402e31f980c27adba853472e69d120d6ad794409bf1956f3e0dc97b1e0d2ca1 WatchSource:0}: Error finding container 3402e31f980c27adba853472e69d120d6ad794409bf1956f3e0dc97b1e0d2ca1: Status 404 returned error can't find the container with id 3402e31f980c27adba853472e69d120d6ad794409bf1956f3e0dc97b1e0d2ca1 Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.811901 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerStarted","Data":"9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3"} Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.812613 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.812666 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.812757 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.818855 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" event={"ID":"3be77ab3-0638-4ffa-960a-34823c8e08a1","Type":"ContainerStarted","Data":"01781532fca08ef807ebb7b6537744297a0b22c60d8a830a6142b030b3e4999c"} Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.818950 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" event={"ID":"3be77ab3-0638-4ffa-960a-34823c8e08a1","Type":"ContainerStarted","Data":"3402e31f980c27adba853472e69d120d6ad794409bf1956f3e0dc97b1e0d2ca1"} Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.853463 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.854727 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.871087 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podStartSLOduration=71.87105999 podStartE2EDuration="1m11.87105999s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:30.853943132 +0000 UTC m=+90.751617974" watchObservedRunningTime="2025-12-12 16:16:30.87105999 +0000 UTC m=+90.768734832" Dec 12 16:16:30 crc kubenswrapper[5130]: I1212 16:16:30.907436 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-d85ps" podStartSLOduration=71.907411237 podStartE2EDuration="1m11.907411237s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:30.871342567 +0000 UTC m=+90.769017399" watchObservedRunningTime="2025-12-12 16:16:30.907411237 +0000 UTC m=+90.805086079" Dec 12 16:16:31 crc kubenswrapper[5130]: I1212 16:16:31.381629 5130 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 16:16:31 crc kubenswrapper[5130]: I1212 
16:16:31.825883 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" event={"ID":"fbe9d4b4-6ed6-4516-a3b9-5aaa9f447f75","Type":"ContainerStarted","Data":"926b4387a88be0645fdaf2700b1c8f56df3294d7b61e4359f02a5b6f4ae7bd7a"} Dec 12 16:16:31 crc kubenswrapper[5130]: I1212 16:16:31.848811 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mqfd8" podStartSLOduration=72.84879595 podStartE2EDuration="1m12.84879595s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:31.847898858 +0000 UTC m=+91.745573690" watchObservedRunningTime="2025-12-12 16:16:31.84879595 +0000 UTC m=+91.746470782" Dec 12 16:16:32 crc kubenswrapper[5130]: I1212 16:16:32.369275 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:32 crc kubenswrapper[5130]: I1212 16:16:32.369324 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:32 crc kubenswrapper[5130]: I1212 16:16:32.369275 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:32 crc kubenswrapper[5130]: E1212 16:16:32.369470 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:32 crc kubenswrapper[5130]: E1212 16:16:32.369637 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:32 crc kubenswrapper[5130]: I1212 16:16:32.369681 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:32 crc kubenswrapper[5130]: E1212 16:16:32.369742 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:32 crc kubenswrapper[5130]: E1212 16:16:32.369803 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:33 crc kubenswrapper[5130]: I1212 16:16:33.371348 5130 scope.go:117] "RemoveContainer" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca" Dec 12 16:16:33 crc kubenswrapper[5130]: E1212 16:16:33.371552 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 12 16:16:33 crc kubenswrapper[5130]: I1212 16:16:33.596830 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jhhcn"] Dec 12 16:16:33 crc kubenswrapper[5130]: I1212 16:16:33.597352 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:33 crc kubenswrapper[5130]: E1212 16:16:33.597467 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:34 crc kubenswrapper[5130]: I1212 16:16:34.369396 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:34 crc kubenswrapper[5130]: I1212 16:16:34.369426 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:34 crc kubenswrapper[5130]: E1212 16:16:34.369518 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:34 crc kubenswrapper[5130]: E1212 16:16:34.369587 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:34 crc kubenswrapper[5130]: I1212 16:16:34.369612 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:34 crc kubenswrapper[5130]: E1212 16:16:34.369658 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:35 crc kubenswrapper[5130]: I1212 16:16:35.369728 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:35 crc kubenswrapper[5130]: E1212 16:16:35.369886 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:36 crc kubenswrapper[5130]: I1212 16:16:36.369584 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:36 crc kubenswrapper[5130]: I1212 16:16:36.369625 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:36 crc kubenswrapper[5130]: I1212 16:16:36.369625 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:36 crc kubenswrapper[5130]: E1212 16:16:36.370045 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 12 16:16:36 crc kubenswrapper[5130]: E1212 16:16:36.370108 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 12 16:16:36 crc kubenswrapper[5130]: E1212 16:16:36.369912 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 12 16:16:37 crc kubenswrapper[5130]: I1212 16:16:37.368805 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:37 crc kubenswrapper[5130]: E1212 16:16:37.369677 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jhhcn" podUID="4e8bbb2d-9d91-4541-a2d2-891ab81dd883" Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.249056 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.249342 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:54.249281624 +0000 UTC m=+114.146956456 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.249452 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.249502 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.249542 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.249740 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.249779 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.249791 5130 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.249872 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:54.249850277 +0000 UTC m=+114.147525320 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.249875 5130 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.249944 5130 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.249964 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.250026 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:54.249981851 +0000 UTC m=+114.147656803 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.250028 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.250096 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:54.250081803 +0000 UTC m=+114.147756625 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.250119 5130 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.250147 5130 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.250283 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:54.250261368 +0000 UTC m=+114.147936380 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.351714 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.351886 5130 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.351956 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs podName:4e8bbb2d-9d91-4541-a2d2-891ab81dd883 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:54.35193633 +0000 UTC m=+114.249611162 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs") pod "network-metrics-daemon-jhhcn" (UID: "4e8bbb2d-9d91-4541-a2d2-891ab81dd883") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.369535 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.369587 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:38 crc kubenswrapper[5130]: I1212 16:16:38.369810 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.369953 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.370377 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 12 16:16:38 crc kubenswrapper[5130]: E1212 16:16:38.370510 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.314554 5130 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.314719 5130 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.346324 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-62rws"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.834801 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dmjfw"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.834961 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.837950 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.838391 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.838462 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.838890 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.839217 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.839651 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.843999 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.844085 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.844102 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.844094 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.844231 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.844499 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.847312 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.847660 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.847764 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.848470 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.849048 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.849113 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.849148 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.849593 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.849826 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.849932 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.850358 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.851145 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.853490 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.853719 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.856549 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.856623 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.856549 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.857768 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.857955 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.859472 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.859857 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.862163 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.862211 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.862340 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-flnsl"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.862498 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.862687 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.862515 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.862936 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.865138 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.865282 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.865345 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.865142 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.865153 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.866800 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.866922 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.867975 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.868266 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-brfdj"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.868452 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.868577 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.868622 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.868680 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869223 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869254 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0abafdd2-351e-4f65-9dea-5578d313b760-images\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869387 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0abafdd2-351e-4f65-9dea-5578d313b760-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869429 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abafdd2-351e-4f65-9dea-5578d313b760-config\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869639 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9c8\" (UniqueName: \"kubernetes.io/projected/1bfafc57-4718-4d71-9f69-52b321379a27-kube-api-access-pp9c8\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869681 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-config\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869760 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-audit-policies\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869807 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-trusted-ca-bundle\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869839 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f56ef95-299c-4bae-bc46-92e9d8358097-config\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869881 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59rj\" (UniqueName: \"kubernetes.io/projected/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-kube-api-access-h59rj\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869919 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ldm\" (UniqueName: \"kubernetes.io/projected/0abafdd2-351e-4f65-9dea-5578d313b760-kube-api-access-s5ldm\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.869943 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870087 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-encryption-config\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870218 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f56ef95-299c-4bae-bc46-92e9d8358097-auth-proxy-config\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870260 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-serving-cert\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870351 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlmd\" (UniqueName: \"kubernetes.io/projected/6f56ef95-299c-4bae-bc46-92e9d8358097-kube-api-access-txlmd\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870412 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-etcd-client\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870442 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-etcd-serving-ca\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870466 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870501 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bfafc57-4718-4d71-9f69-52b321379a27-audit-dir\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870537 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6f56ef95-299c-4bae-bc46-92e9d8358097-machine-approver-tls\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870545 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870650 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870662 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870828 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.870983 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.872790 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.873674 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.873687 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.872986 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.874066 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.882130 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.883301 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.883663 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.883685 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.883784 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.885891 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-zhgm9"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.886874 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.890911 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.890949 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.890963 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.890981 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.891226 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.892664 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5tw72"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.893512 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.895631 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.895820 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.895916 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.895955 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.896273 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.896374 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.896701 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.898996 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.899460 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.899533 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.899839 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.900201 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.900516 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.901058 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.898997 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.901555 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.901865 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.903924 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.903974
5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.904347 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.906766 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.906784 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907068 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907346 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907348 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907562 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907684 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907787 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 
16:16:39.907689 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907975 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.907976 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.908272 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.908977 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.912985 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.913492 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.913820 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.913955 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.914002 5130 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.914001 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.914345 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.914429 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.914665 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.915875 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.924547 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.927693 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"] Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.928258 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.931895 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.936057 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.939499 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.942725 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.948573 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"] Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.957827 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.960952 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.961071 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92"] Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.961615 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.964943 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf"] Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.965221 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.968786 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv"] Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.969783 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971094 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-client-ca\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971334 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfhxj\" (UniqueName: \"kubernetes.io/projected/a78c6a97-054e-484e-aae2-a33bd3bb7b40-kube-api-access-vfhxj\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971367 5130 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-config\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971390 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22a6a238-12c9-43ae-afbc-f9595d46e727-serving-cert\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971414 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a6a238-12c9-43ae-afbc-f9595d46e727-kube-api-access\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971437 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/22a6a238-12c9-43ae-afbc-f9595d46e727-tmp-dir\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971466 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-trusted-ca-bundle\") pod \"apiserver-8596bd845d-njgb5\" 
(UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971492 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971553 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e81fea-065e-43b5-8e56-49bfcfa342f7-secret-volume\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971590 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4c111429-5512-4d9c-898b-d3ec0bdb5d08-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971644 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4g5m\" (UniqueName: \"kubernetes.io/projected/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-kube-api-access-h4g5m\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:39 crc 
kubenswrapper[5130]: I1212 16:16:39.971739 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-client-ca\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971781 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f56ef95-299c-4bae-bc46-92e9d8358097-config\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.971969 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-config\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972078 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-console-config\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972206 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-service-ca\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") 
" pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972330 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7fxv\" (UniqueName: \"kubernetes.io/projected/65efae24-6623-454c-b665-e5e407e86269-kube-api-access-v7fxv\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972453 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h59rj\" (UniqueName: \"kubernetes.io/projected/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-kube-api-access-h59rj\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972463 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-trusted-ca-bundle\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972555 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972664 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8c4\" 
(UniqueName: \"kubernetes.io/projected/2a282672-c872-405b-9325-f8f48865334c-kube-api-access-rc8c4\") pod \"cluster-samples-operator-6b564684c8-fzlkp\" (UID: \"2a282672-c872-405b-9325-f8f48865334c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972701 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972729 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972727 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f56ef95-299c-4bae-bc46-92e9d8358097-config\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972781 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78c6a97-054e-484e-aae2-a33bd3bb7b40-serving-cert\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.972932 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-oauth-serving-cert\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973005 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5ldm\" (UniqueName: \"kubernetes.io/projected/0abafdd2-351e-4f65-9dea-5578d313b760-kube-api-access-s5ldm\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973037 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-policies\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973063 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a78c6a97-054e-484e-aae2-a33bd3bb7b40-tmp\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973127 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4c111429-5512-4d9c-898b-d3ec0bdb5d08-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973158 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973227 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c111429-5512-4d9c-898b-d3ec0bdb5d08-config\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973253 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-config\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973324 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-encryption-config\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973393 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-config\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973451 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973490 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973588 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj55g\" (UniqueName: \"kubernetes.io/projected/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-kube-api-access-rj55g\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973804 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f56ef95-299c-4bae-bc46-92e9d8358097-auth-proxy-config\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973852 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973910 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-serving-cert\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.973942 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974003 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-serving-cert\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974034 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c111429-5512-4d9c-898b-d3ec0bdb5d08-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974084 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974137 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-txlmd\" (UniqueName: \"kubernetes.io/projected/6f56ef95-299c-4bae-bc46-92e9d8358097-kube-api-access-txlmd\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974171 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-etcd-client\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974255 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-etcd-serving-ca\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974305 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a282672-c872-405b-9325-f8f48865334c-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-fzlkp\" (UID: \"2a282672-c872-405b-9325-f8f48865334c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974355 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv2mw\" (UniqueName: \"kubernetes.io/projected/d259a06e-3949-41b6-a067-7c01441da4b1-kube-api-access-wv2mw\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974411 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974440 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65efae24-6623-454c-b665-e5e407e86269-trusted-ca\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974500 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bfafc57-4718-4d71-9f69-52b321379a27-audit-dir\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974532 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8kx\" (UniqueName: \"kubernetes.io/projected/e13eeec0-72dd-418b-9180-87ca0d56870d-kube-api-access-qz8kx\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974556 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a6a238-12c9-43ae-afbc-f9595d46e727-config\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974591 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6f56ef95-299c-4bae-bc46-92e9d8358097-machine-approver-tls\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974628 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974901 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bfafc57-4718-4d71-9f69-52b321379a27-audit-dir\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.974996 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0abafdd2-351e-4f65-9dea-5578d313b760-images\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975053 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f56ef95-299c-4bae-bc46-92e9d8358097-auth-proxy-config\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975058 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0abafdd2-351e-4f65-9dea-5578d313b760-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975085 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d259a06e-3949-41b6-a067-7c01441da4b1-serving-cert\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975235 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abafdd2-351e-4f65-9dea-5578d313b760-config\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975354 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-etcd-serving-ca\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975534 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-dir\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975680 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975712 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975740 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4651322b-9aec-4667-afa3-1602ad5176fe-console-serving-cert\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975789 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-trusted-ca-bundle\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975814 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65efae24-6623-454c-b665-e5e407e86269-serving-cert\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975836 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.975917 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0abafdd2-351e-4f65-9dea-5578d313b760-images\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976065 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9c8\" (UniqueName: \"kubernetes.io/projected/1bfafc57-4718-4d71-9f69-52b321379a27-kube-api-access-pp9c8\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976105 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-config\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976137 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976211 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4bw6\" (UniqueName: \"kubernetes.io/projected/4651322b-9aec-4667-afa3-1602ad5176fe-kube-api-access-z4bw6\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976234 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976257 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e81fea-065e-43b5-8e56-49bfcfa342f7-config-volume\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976277 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d55f43e2-46df-4460-b17f-0daa75b89154-serving-cert\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976296 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75mj\" (UniqueName: \"kubernetes.io/projected/d55f43e2-46df-4460-b17f-0daa75b89154-kube-api-access-l75mj\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976317 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d259a06e-3949-41b6-a067-7c01441da4b1-tmp\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976339 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976359 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csnbw\" (UniqueName: \"kubernetes.io/projected/19e81fea-065e-43b5-8e56-49bfcfa342f7-kube-api-access-csnbw\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976378 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4651322b-9aec-4667-afa3-1602ad5176fe-console-oauth-config\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976397 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65efae24-6623-454c-b665-e5e407e86269-config\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976417 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976446 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-audit-policies\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976468 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.976772 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abafdd2-351e-4f65-9dea-5578d313b760-config\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.977055 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.977242 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.977362 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-config\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.977451 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bfafc57-4718-4d71-9f69-52b321379a27-audit-policies\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.978236 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.981786 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-encryption-config\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.981991 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-serving-cert\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.982414 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6f56ef95-299c-4bae-bc46-92e9d8358097-machine-approver-tls\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.982529 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0abafdd2-351e-4f65-9dea-5578d313b760-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-dmjfw\" (UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.982712 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-sm46g"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.982940 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bfafc57-4718-4d71-9f69-52b321379a27-etcd-client\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.982871 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.983383 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.988337 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-49zmj"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.988552 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-sm46g"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.993905 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.994030 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj"
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.997076 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-sg8rq"]
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.998778 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 12 16:16:39 crc kubenswrapper[5130]: I1212 16:16:39.998895 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.002115 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.002480 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.005421 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jqtjf"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.005601 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.010822 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.014466 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.015597 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.015606 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.018861 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.019045 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2w9hn"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.019133 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.024131 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.024268 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.027827 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-dmjfw"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.028020 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.028281 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.032081 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.032267 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.037791 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.038612 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xks9x"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.038771 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.043538 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-gsm6t"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.043635 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-xks9x"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.047763 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.047856 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-gsm6t"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.052197 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.052612 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.057199 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.057232 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.057249 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-bqttx"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.057457 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.060044 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.060071 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-zhgm9"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.060082 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5tw72"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.060093 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.060110 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tqcqf"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.060241 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bqttx"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.064133 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.064156 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-q8kdt"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.064272 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tqcqf"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069048 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-brfdj"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069114 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069134 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069148 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069165 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069168 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069226 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.069278 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nwxp2"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072793 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-sm46g"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072819 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072829 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-flnsl"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072845 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-sg8rq"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072855 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072865 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072873 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2w9hn"]
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072882 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj"]
Dec 12 16:16:40 crc
kubenswrapper[5130]: I1212 16:16:40.072892 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072901 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072909 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072922 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072931 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xks9x"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072941 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-49zmj"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072951 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072962 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072973 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072983 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.072990 5130 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.073003 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jqtjf"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.073234 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.073257 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.073273 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-gsm6t"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.073291 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-59hhc"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078208 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078279 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d259a06e-3949-41b6-a067-7c01441da4b1-serving-cert\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078304 
5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-dir\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078341 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078362 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078383 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4651322b-9aec-4667-afa3-1602ad5176fe-console-serving-cert\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078407 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-trusted-ca-bundle\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " 
pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078422 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65efae24-6623-454c-b665-e5e407e86269-serving-cert\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078474 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078498 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-594n6\" (UniqueName: \"kubernetes.io/projected/124ec2f9-0e23-47da-b25f-66a13947465e-kube-api-access-594n6\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078517 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078534 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-key\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078548 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-cabundle\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078668 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-dir\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078702 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078735 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4bw6\" (UniqueName: \"kubernetes.io/projected/4651322b-9aec-4667-afa3-1602ad5176fe-kube-api-access-z4bw6\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078776 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: 
\"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.078808 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/338f89a1-1c2f-4e37-9572-c5b13d682ca9-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.080200 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e81fea-065e-43b5-8e56-49bfcfa342f7-config-volume\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.080210 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rl44g"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.080428 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.080493 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.080527 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d55f43e2-46df-4460-b17f-0daa75b89154-serving-cert\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081251 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l75mj\" (UniqueName: \"kubernetes.io/projected/d55f43e2-46df-4460-b17f-0daa75b89154-kube-api-access-l75mj\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081312 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081388 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d259a06e-3949-41b6-a067-7c01441da4b1-tmp\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081435 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081530 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-csnbw\" (UniqueName: \"kubernetes.io/projected/19e81fea-065e-43b5-8e56-49bfcfa342f7-kube-api-access-csnbw\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081590 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4651322b-9aec-4667-afa3-1602ad5176fe-console-oauth-config\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081676 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65efae24-6623-454c-b665-e5e407e86269-config\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081710 5130 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081757 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c49153e-af72-4d2f-8184-fa7ba43a5a3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-m8gw7\" (UID: \"9c49153e-af72-4d2f-8184-fa7ba43a5a3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081799 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081788 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.081999 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d259a06e-3949-41b6-a067-7c01441da4b1-tmp\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.082223 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.082465 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.082518 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-client-ca\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.082604 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfhxj\" (UniqueName: \"kubernetes.io/projected/a78c6a97-054e-484e-aae2-a33bd3bb7b40-kube-api-access-vfhxj\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" 
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.083499 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-client-ca\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.083732 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65efae24-6623-454c-b665-e5e407e86269-config\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.083953 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-config\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084013 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22a6a238-12c9-43ae-afbc-f9595d46e727-serving-cert\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084054 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a6a238-12c9-43ae-afbc-f9595d46e727-kube-api-access\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: 
\"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084091 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/22a6a238-12c9-43ae-afbc-f9595d46e727-tmp-dir\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084142 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084200 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e81fea-065e-43b5-8e56-49bfcfa342f7-secret-volume\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084445 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4c111429-5512-4d9c-898b-d3ec0bdb5d08-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084809 5130 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-trusted-ca-bundle\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084898 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/22a6a238-12c9-43ae-afbc-f9595d46e727-tmp-dir\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085034 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4c111429-5512-4d9c-898b-d3ec0bdb5d08-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.084521 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4g5m\" (UniqueName: \"kubernetes.io/projected/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-kube-api-access-h4g5m\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085529 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085564 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-config\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085598 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-client-ca\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085638 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh9pn\" (UniqueName: \"kubernetes.io/projected/6baa2db5-b688-47dd-8d81-7dadbbbd3759-kube-api-access-lh9pn\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085671 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-config\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085701 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-console-config\") pod 
\"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085730 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-service-ca\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085764 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7fxv\" (UniqueName: \"kubernetes.io/projected/65efae24-6623-454c-b665-e5e407e86269-kube-api-access-v7fxv\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085795 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1de41ef3-7896-4e9c-8201-8174bc4468c4-tmp\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085825 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085867 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8c4\" (UniqueName: 
\"kubernetes.io/projected/2a282672-c872-405b-9325-f8f48865334c-kube-api-access-rc8c4\") pod \"cluster-samples-operator-6b564684c8-fzlkp\" (UID: \"2a282672-c872-405b-9325-f8f48865334c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085898 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085930 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085959 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.085983 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-images\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") 
" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.086014 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1999cfc6-e5a0-4ddb-883d-71f861b286a8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.086044 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/124ec2f9-0e23-47da-b25f-66a13947465e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.086073 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/124ec2f9-0e23-47da-b25f-66a13947465e-tmpfs\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.086103 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-config\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.086553 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-client-ca\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087203 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-service-ca\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087358 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d259a06e-3949-41b6-a067-7c01441da4b1-serving-cert\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087378 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087668 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgtw\" (UniqueName: \"kubernetes.io/projected/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-kube-api-access-dlgtw\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 
16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087738 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78c6a97-054e-484e-aae2-a33bd3bb7b40-serving-cert\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087780 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-oauth-serving-cert\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087822 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-policies\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087829 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d55f43e2-46df-4460-b17f-0daa75b89154-serving-cert\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087857 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a78c6a97-054e-484e-aae2-a33bd3bb7b40-tmp\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087954 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c111429-5512-4d9c-898b-d3ec0bdb5d08-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.087991 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdt5\" (UniqueName: \"kubernetes.io/projected/f967d508-b683-4df4-9be0-3a7fb5afa7bb-kube-api-access-5qdt5\") pod \"downloads-747b44746d-sm46g\" (UID: \"f967d508-b683-4df4-9be0-3a7fb5afa7bb\") " pod="openshift-console/downloads-747b44746d-sm46g" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.088032 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c111429-5512-4d9c-898b-d3ec0bdb5d08-config\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.088067 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-config\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.088131 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-q4xfb\" (UniqueName: \"kubernetes.io/projected/1de41ef3-7896-4e9c-8201-8174bc4468c4-kube-api-access-q4xfb\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.088168 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j957x\" (UniqueName: \"kubernetes.io/projected/47102097-389c-44ce-a25f-6b8d25a70e1d-kube-api-access-j957x\") pod \"ingress-canary-tqcqf\" (UID: \"47102097-389c-44ce-a25f-6b8d25a70e1d\") " pod="openshift-ingress-canary/ingress-canary-tqcqf" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.088394 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-config\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.088763 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4651322b-9aec-4667-afa3-1602ad5176fe-console-serving-cert\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.088809 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-oauth-serving-cert\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.089035 5130 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-config\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093082 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-config\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.089170 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-policies\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.089907 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4651322b-9aec-4667-afa3-1602ad5176fe-console-config\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093154 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfvwr\" (UniqueName: \"kubernetes.io/projected/9c49153e-af72-4d2f-8184-fa7ba43a5a3e-kube-api-access-jfvwr\") pod \"control-plane-machine-set-operator-75ffdb6fcd-m8gw7\" (UID: \"9c49153e-af72-4d2f-8184-fa7ba43a5a3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7" Dec 12 
16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.090005 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.089941 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c111429-5512-4d9c-898b-d3ec0bdb5d08-config\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.089679 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093266 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093363 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47102097-389c-44ce-a25f-6b8d25a70e1d-cert\") pod \"ingress-canary-tqcqf\" 
(UID: \"47102097-389c-44ce-a25f-6b8d25a70e1d\") " pod="openshift-ingress-canary/ingress-canary-tqcqf" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093373 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tqcqf"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093410 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-59hhc"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093417 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/124ec2f9-0e23-47da-b25f-66a13947465e-srv-cert\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093430 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rl44g"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093528 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093576 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rj55g\" (UniqueName: \"kubernetes.io/projected/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-kube-api-access-rj55g\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093616 
5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rl44g" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093531 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093617 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/338f89a1-1c2f-4e37-9572-c5b13d682ca9-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093888 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093920 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093952 5130 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-serving-cert\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.093977 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkzkz\" (UniqueName: \"kubernetes.io/projected/1999cfc6-e5a0-4ddb-883d-71f861b286a8-kube-api-access-dkzkz\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094002 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xjv\" (UniqueName: \"kubernetes.io/projected/338f89a1-1c2f-4e37-9572-c5b13d682ca9-kube-api-access-z2xjv\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094068 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-config\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094097 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-brfdj\" 
(UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094154 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c111429-5512-4d9c-898b-d3ec0bdb5d08-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094337 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094386 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094420 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/338f89a1-1c2f-4e37-9572-c5b13d682ca9-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094492 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a282672-c872-405b-9325-f8f48865334c-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-fzlkp\" (UID: \"2a282672-c872-405b-9325-f8f48865334c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094558 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wv2mw\" (UniqueName: \"kubernetes.io/projected/d259a06e-3949-41b6-a067-7c01441da4b1-kube-api-access-wv2mw\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094598 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094636 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65efae24-6623-454c-b665-e5e407e86269-trusted-ca\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094684 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qz8kx\" (UniqueName: \"kubernetes.io/projected/e13eeec0-72dd-418b-9180-87ca0d56870d-kube-api-access-qz8kx\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: 
\"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094715 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a6a238-12c9-43ae-afbc-f9595d46e727-config\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.094811 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22a6a238-12c9-43ae-afbc-f9595d46e727-serving-cert\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.095278 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.095478 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.095927 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d55f43e2-46df-4460-b17f-0daa75b89154-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.095999 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22a6a238-12c9-43ae-afbc-f9595d46e727-config\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.096234 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65efae24-6623-454c-b665-e5e407e86269-trusted-ca\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.096539 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65efae24-6623-454c-b665-e5e407e86269-serving-cert\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.096720 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a78c6a97-054e-484e-aae2-a33bd3bb7b40-tmp\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.096961 5130 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.097896 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.098401 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78c6a97-054e-484e-aae2-a33bd3bb7b40-serving-cert\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.098500 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.098672 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4651322b-9aec-4667-afa3-1602ad5176fe-console-oauth-config\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.101370 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.101556 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c111429-5512-4d9c-898b-d3ec0bdb5d08-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.101592 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.101853 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.101890 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.102161 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-serving-cert\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.103811 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a282672-c872-405b-9325-f8f48865334c-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-fzlkp\" (UID: \"2a282672-c872-405b-9325-f8f48865334c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.104609 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.117511 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.137567 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.148018 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/19e81fea-065e-43b5-8e56-49bfcfa342f7-secret-volume\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.158425 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.177816 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196124 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c49153e-af72-4d2f-8184-fa7ba43a5a3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-m8gw7\" (UID: \"9c49153e-af72-4d2f-8184-fa7ba43a5a3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196202 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196254 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196406 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lh9pn\" (UniqueName: \"kubernetes.io/projected/6baa2db5-b688-47dd-8d81-7dadbbbd3759-kube-api-access-lh9pn\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196437 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1de41ef3-7896-4e9c-8201-8174bc4468c4-tmp\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196463 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196489 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-images\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196573 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1999cfc6-e5a0-4ddb-883d-71f861b286a8-proxy-tls\") pod 
\"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196603 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/124ec2f9-0e23-47da-b25f-66a13947465e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196628 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/124ec2f9-0e23-47da-b25f-66a13947465e-tmpfs\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196655 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-config\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196657 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196684 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dlgtw\" (UniqueName: \"kubernetes.io/projected/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-kube-api-access-dlgtw\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196778 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdt5\" (UniqueName: \"kubernetes.io/projected/f967d508-b683-4df4-9be0-3a7fb5afa7bb-kube-api-access-5qdt5\") pod \"downloads-747b44746d-sm46g\" (UID: \"f967d508-b683-4df4-9be0-3a7fb5afa7bb\") " pod="openshift-console/downloads-747b44746d-sm46g" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196857 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q4xfb\" (UniqueName: \"kubernetes.io/projected/1de41ef3-7896-4e9c-8201-8174bc4468c4-kube-api-access-q4xfb\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196890 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j957x\" (UniqueName: \"kubernetes.io/projected/47102097-389c-44ce-a25f-6b8d25a70e1d-kube-api-access-j957x\") pod \"ingress-canary-tqcqf\" (UID: \"47102097-389c-44ce-a25f-6b8d25a70e1d\") " pod="openshift-ingress-canary/ingress-canary-tqcqf" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196937 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jfvwr\" (UniqueName: \"kubernetes.io/projected/9c49153e-af72-4d2f-8184-fa7ba43a5a3e-kube-api-access-jfvwr\") pod \"control-plane-machine-set-operator-75ffdb6fcd-m8gw7\" (UID: 
\"9c49153e-af72-4d2f-8184-fa7ba43a5a3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.196977 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47102097-389c-44ce-a25f-6b8d25a70e1d-cert\") pod \"ingress-canary-tqcqf\" (UID: \"47102097-389c-44ce-a25f-6b8d25a70e1d\") " pod="openshift-ingress-canary/ingress-canary-tqcqf" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197257 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/124ec2f9-0e23-47da-b25f-66a13947465e-srv-cert\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197316 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/338f89a1-1c2f-4e37-9572-c5b13d682ca9-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197349 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197387 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkzkz\" (UniqueName: 
\"kubernetes.io/projected/1999cfc6-e5a0-4ddb-883d-71f861b286a8-kube-api-access-dkzkz\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197413 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xjv\" (UniqueName: \"kubernetes.io/projected/338f89a1-1c2f-4e37-9572-c5b13d682ca9-kube-api-access-z2xjv\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197452 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197478 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/338f89a1-1c2f-4e37-9572-c5b13d682ca9-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197613 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-594n6\" (UniqueName: \"kubernetes.io/projected/124ec2f9-0e23-47da-b25f-66a13947465e-kube-api-access-594n6\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 
crc kubenswrapper[5130]: I1212 16:16:40.197654 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-key\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197680 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-cabundle\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197693 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1de41ef3-7896-4e9c-8201-8174bc4468c4-tmp\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.197729 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/338f89a1-1c2f-4e37-9572-c5b13d682ca9-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.198151 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.198302 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/124ec2f9-0e23-47da-b25f-66a13947465e-tmpfs\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.198353 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.201105 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/124ec2f9-0e23-47da-b25f-66a13947465e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.203901 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e81fea-065e-43b5-8e56-49bfcfa342f7-config-volume\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.218735 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.231329 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/124ec2f9-0e23-47da-b25f-66a13947465e-srv-cert\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: 
\"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.237477 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.257376 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.277135 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.298866 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.316968 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.353467 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59rj\" (UniqueName: \"kubernetes.io/projected/e0a1decf-4248-4f48-ba06-e9ec8fdbbea8-kube-api-access-h59rj\") pod \"openshift-apiserver-operator-846cbfc458-zf8cv\" (UID: \"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.376598 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5ldm\" (UniqueName: \"kubernetes.io/projected/0abafdd2-351e-4f65-9dea-5578d313b760-kube-api-access-s5ldm\") pod \"machine-api-operator-755bb95488-dmjfw\" 
(UID: \"0abafdd2-351e-4f65-9dea-5578d313b760\") " pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.396933 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-txlmd\" (UniqueName: \"kubernetes.io/projected/6f56ef95-299c-4bae-bc46-92e9d8358097-kube-api-access-txlmd\") pod \"machine-approver-54c688565-62rws\" (UID: \"6f56ef95-299c-4bae-bc46-92e9d8358097\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.414793 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9c8\" (UniqueName: \"kubernetes.io/projected/1bfafc57-4718-4d71-9f69-52b321379a27-kube-api-access-pp9c8\") pod \"apiserver-8596bd845d-njgb5\" (UID: \"1bfafc57-4718-4d71-9f69-52b321379a27\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.417638 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.437706 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.442091 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/338f89a1-1c2f-4e37-9572-c5b13d682ca9-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.450227 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.457947 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 12 16:16:40 crc kubenswrapper[5130]: W1212 16:16:40.469819 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f56ef95_299c_4bae_bc46_92e9d8358097.slice/crio-578b71735475f356b2d2391d0734c09f3a3e8ab112ecf5d21f471ef11ddb1c81 WatchSource:0}: Error finding container 578b71735475f356b2d2391d0734c09f3a3e8ab112ecf5d21f471ef11ddb1c81: Status 404 returned error can't find the container with id 578b71735475f356b2d2391d0734c09f3a3e8ab112ecf5d21f471ef11ddb1c81 Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.485500 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.489910 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/338f89a1-1c2f-4e37-9572-c5b13d682ca9-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.501322 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.518150 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.518994 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.528111 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.532620 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.536715 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.537960 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.557814 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.577895 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.598973 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.608867 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-config\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.618317 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.639419 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.658878 5130 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.684006 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.698985 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.719717 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.739631 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.758456 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.779668 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.798282 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.818525 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.845494 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.853834 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-755bb95488-dmjfw"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.857432 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.864127 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" event={"ID":"6f56ef95-299c-4bae-bc46-92e9d8358097","Type":"ContainerStarted","Data":"578b71735475f356b2d2391d0734c09f3a3e8ab112ecf5d21f471ef11ddb1c81"} Dec 12 16:16:40 crc kubenswrapper[5130]: W1212 16:16:40.865985 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0abafdd2_351e_4f65_9dea_5578d313b760.slice/crio-46d4077c2585bacb39949d15d21428f3e181029f32d32ad523532640b0b77944 WatchSource:0}: Error finding container 46d4077c2585bacb39949d15d21428f3e181029f32d32ad523532640b0b77944: Status 404 returned error can't find the container with id 46d4077c2585bacb39949d15d21428f3e181029f32d32ad523532640b0b77944 Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.870999 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.883453 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.894998 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv"] Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.898370 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.918882 5130 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.939379 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.958682 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.975359 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:16:40 crc kubenswrapper[5130]: I1212 16:16:40.979607 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.004097 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.015651 5130 request.go:752] "Waited before sending request" delay="1.009561658s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.038913 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.041217 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.044929 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.061094 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.078146 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.098191 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.111899 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c49153e-af72-4d2f-8184-fa7ba43a5a3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-m8gw7\" (UID: \"9c49153e-af72-4d2f-8184-fa7ba43a5a3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.119900 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.138397 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.158754 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.178381 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.197228 5130 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.197362 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47102097-389c-44ce-a25f-6b8d25a70e1d-cert podName:47102097-389c-44ce-a25f-6b8d25a70e1d nodeName:}" failed. No retries permitted until 2025-12-12 16:16:41.697329837 +0000 UTC m=+101.595004669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47102097-389c-44ce-a25f-6b8d25a70e1d-cert") pod "ingress-canary-tqcqf" (UID: "47102097-389c-44ce-a25f-6b8d25a70e1d") : failed to sync secret cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.197643 5130 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.197691 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.197804 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-images podName:1999cfc6-e5a0-4ddb-883d-71f861b286a8 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:41.697765178 +0000 UTC m=+101.595440010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-images") pod "machine-config-operator-67c9d58cbb-bg744" (UID: "1999cfc6-e5a0-4ddb-883d-71f861b286a8") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.197924 5130 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.198087 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1999cfc6-e5a0-4ddb-883d-71f861b286a8-proxy-tls podName:1999cfc6-e5a0-4ddb-883d-71f861b286a8 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:41.698056405 +0000 UTC m=+101.595731237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/1999cfc6-e5a0-4ddb-883d-71f861b286a8-proxy-tls") pod "machine-config-operator-67c9d58cbb-bg744" (UID: "1999cfc6-e5a0-4ddb-883d-71f861b286a8") : failed to sync secret cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.198147 5130 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.198238 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-key podName:6baa2db5-b688-47dd-8d81-7dadbbbd3759 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:41.698224279 +0000 UTC m=+101.595899311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-key") pod "service-ca-74545575db-gsm6t" (UID: "6baa2db5-b688-47dd-8d81-7dadbbbd3759") : failed to sync secret cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.198145 5130 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: E1212 16:16:41.198767 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-cabundle podName:6baa2db5-b688-47dd-8d81-7dadbbbd3759 nodeName:}" failed. No retries permitted until 2025-12-12 16:16:41.698749602 +0000 UTC m=+101.596424594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-cabundle") pod "service-ca-74545575db-gsm6t" (UID: "6baa2db5-b688-47dd-8d81-7dadbbbd3759") : failed to sync configmap cache: timed out waiting for the condition
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.217791 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.239275 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.260768 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.279321 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.298513 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.319585 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.337757 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.357894 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.377552 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.398522 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.418508 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.438502 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.458411 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.478341 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.498322 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.517982 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.537861 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.558142 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.578290 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.598892 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.617946 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.637732 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.658756 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.678287 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.702784 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.719603 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.722916 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-key\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.722947 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-cabundle\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.723013 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-images\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.723046 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1999cfc6-e5a0-4ddb-883d-71f861b286a8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.723290 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47102097-389c-44ce-a25f-6b8d25a70e1d-cert\") pod \"ingress-canary-tqcqf\" (UID: \"47102097-389c-44ce-a25f-6b8d25a70e1d\") " pod="openshift-ingress-canary/ingress-canary-tqcqf"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.723988 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1999cfc6-e5a0-4ddb-883d-71f861b286a8-images\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.724054 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-cabundle\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.728008 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6baa2db5-b688-47dd-8d81-7dadbbbd3759-signing-key\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.728317 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1999cfc6-e5a0-4ddb-883d-71f861b286a8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.738484 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.758017 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.778818 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.801104 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.819290 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.839437 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.858112 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.869631 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv" event={"ID":"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8","Type":"ContainerStarted","Data":"b976a7939666668fac298521dfcb6afca5080d62bc37b2dfadd7821000443991"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.869719 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv" event={"ID":"e0a1decf-4248-4f48-ba06-e9ec8fdbbea8","Type":"ContainerStarted","Data":"feea72f891d532e7dfe73c08b63e823fe57b57ede311144b739a5b42f15b970e"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.871944 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw" event={"ID":"0abafdd2-351e-4f65-9dea-5578d313b760","Type":"ContainerStarted","Data":"000bafc8ec1fb37ad2abe0848706379c82cf858adf8b9716a5fb58554b737af2"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.871985 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw" event={"ID":"0abafdd2-351e-4f65-9dea-5578d313b760","Type":"ContainerStarted","Data":"22d3ccec5b17469ad200471e18f8ff3e3cc9fdeda5c2de923a32d0d91dc830c8"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.871997 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw" event={"ID":"0abafdd2-351e-4f65-9dea-5578d313b760","Type":"ContainerStarted","Data":"46d4077c2585bacb39949d15d21428f3e181029f32d32ad523532640b0b77944"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.874164 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" event={"ID":"6f56ef95-299c-4bae-bc46-92e9d8358097","Type":"ContainerStarted","Data":"7033ad7b0fad7fa19d6835390952646ac8833be5a99eb9dd1f8e691427043491"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.874261 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" event={"ID":"6f56ef95-299c-4bae-bc46-92e9d8358097","Type":"ContainerStarted","Data":"9b955c49e36100c59d706583adbbf2f9a64dd45c7e75c4d6332d2313100a743c"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.883006 5130 generic.go:358] "Generic (PLEG): container finished" podID="1bfafc57-4718-4d71-9f69-52b321379a27" containerID="af90cc8967a4cc08419f0684b16d689495d59b87586d96a001d4f8bcddb1fa8d" exitCode=0
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.883118 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" event={"ID":"1bfafc57-4718-4d71-9f69-52b321379a27","Type":"ContainerDied","Data":"af90cc8967a4cc08419f0684b16d689495d59b87586d96a001d4f8bcddb1fa8d"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.883224 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" event={"ID":"1bfafc57-4718-4d71-9f69-52b321379a27","Type":"ContainerStarted","Data":"18ae7415275d436b3001de65256fb1f46f23bff514599b79402add982df4c117"}
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.886538 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.898604 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.918747 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.957392 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.978549 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Dec 12 16:16:41 crc kubenswrapper[5130]: I1212 16:16:41.998248 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.017515 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.029589 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47102097-389c-44ce-a25f-6b8d25a70e1d-cert\") pod \"ingress-canary-tqcqf\" (UID: \"47102097-389c-44ce-a25f-6b8d25a70e1d\") " pod="openshift-ingress-canary/ingress-canary-tqcqf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.036342 5130 request.go:752] "Waited before sending request" delay="1.970697974s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.054414 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48374: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.058606 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.079668 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.085095 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48388: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.098596 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.119586 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.124429 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48392: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.153292 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4bw6\" (UniqueName: \"kubernetes.io/projected/4651322b-9aec-4667-afa3-1602ad5176fe-kube-api-access-z4bw6\") pod \"console-64d44f6ddf-zhgm9\" (UID: \"4651322b-9aec-4667-afa3-1602ad5176fe\") " pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.172764 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l75mj\" (UniqueName: \"kubernetes.io/projected/d55f43e2-46df-4460-b17f-0daa75b89154-kube-api-access-l75mj\") pod \"authentication-operator-7f5c659b84-6t92c\" (UID: \"d55f43e2-46df-4460-b17f-0daa75b89154\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.175889 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48396: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.177931 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.213560 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.235632 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-csnbw\" (UniqueName: \"kubernetes.io/projected/19e81fea-065e-43b5-8e56-49bfcfa342f7-kube-api-access-csnbw\") pod \"collect-profiles-29425935-7hkrm\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.237004 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.240846 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48398: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.257707 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.291669 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfhxj\" (UniqueName: \"kubernetes.io/projected/a78c6a97-054e-484e-aae2-a33bd3bb7b40-kube-api-access-vfhxj\") pod \"route-controller-manager-776cdc94d6-zksq4\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.316118 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22a6a238-12c9-43ae-afbc-f9595d46e727-kube-api-access\") pod \"kube-apiserver-operator-575994946d-wff8v\" (UID: \"22a6a238-12c9-43ae-afbc-f9595d46e727\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.331908 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4g5m\" (UniqueName: \"kubernetes.io/projected/8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7-kube-api-access-h4g5m\") pod \"service-ca-operator-5b9c976747-9wbcx\" (UID: \"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.340134 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48402: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.347954 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.353652 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8c4\" (UniqueName: \"kubernetes.io/projected/2a282672-c872-405b-9325-f8f48865334c-kube-api-access-rc8c4\") pod \"cluster-samples-operator-6b564684c8-fzlkp\" (UID: \"2a282672-c872-405b-9325-f8f48865334c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.369930 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.373386 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7fxv\" (UniqueName: \"kubernetes.io/projected/65efae24-6623-454c-b665-e5e407e86269-kube-api-access-v7fxv\") pod \"console-operator-67c89758df-5tw72\" (UID: \"65efae24-6623-454c-b665-e5e407e86269\") " pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.396018 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c111429-5512-4d9c-898b-d3ec0bdb5d08-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-nsdgk\" (UID: \"4c111429-5512-4d9c-898b-d3ec0bdb5d08\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.397628 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.416640 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.426078 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.435908 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.438423 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.442489 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj55g\" (UniqueName: \"kubernetes.io/projected/5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9-kube-api-access-rj55g\") pod \"cluster-image-registry-operator-86c45576b9-sfm9v\" (UID: \"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.445455 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.454424 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.458303 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.463811 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.471569 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.494758 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv2mw\" (UniqueName: \"kubernetes.io/projected/d259a06e-3949-41b6-a067-7c01441da4b1-kube-api-access-wv2mw\") pod \"controller-manager-65b6cccf98-flnsl\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.511968 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz8kx\" (UniqueName: \"kubernetes.io/projected/e13eeec0-72dd-418b-9180-87ca0d56870d-kube-api-access-qz8kx\") pod \"oauth-openshift-66458b6674-brfdj\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") " pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.531741 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.536042 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48414: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.544437 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgtw\" (UniqueName: \"kubernetes.io/projected/9cc5b0f4-dc96-4a65-8404-f3d36ad70787-kube-api-access-dlgtw\") pod \"openshift-controller-manager-operator-686468bdd5-xknw6\" (UID: \"9cc5b0f4-dc96-4a65-8404-f3d36ad70787\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.556024 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh9pn\" (UniqueName: \"kubernetes.io/projected/6baa2db5-b688-47dd-8d81-7dadbbbd3759-kube-api-access-lh9pn\") pod \"service-ca-74545575db-gsm6t\" (UID: \"6baa2db5-b688-47dd-8d81-7dadbbbd3759\") " pod="openshift-service-ca/service-ca-74545575db-gsm6t"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.567777 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.575263 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j957x\" (UniqueName: \"kubernetes.io/projected/47102097-389c-44ce-a25f-6b8d25a70e1d-kube-api-access-j957x\") pod \"ingress-canary-tqcqf\" (UID: \"47102097-389c-44ce-a25f-6b8d25a70e1d\") " pod="openshift-ingress-canary/ingress-canary-tqcqf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.589492 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"]
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.596505 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfvwr\" (UniqueName: \"kubernetes.io/projected/9c49153e-af72-4d2f-8184-fa7ba43a5a3e-kube-api-access-jfvwr\") pod \"control-plane-machine-set-operator-75ffdb6fcd-m8gw7\" (UID: \"9c49153e-af72-4d2f-8184-fa7ba43a5a3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.617448 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4xfb\" (UniqueName: \"kubernetes.io/projected/1de41ef3-7896-4e9c-8201-8174bc4468c4-kube-api-access-q4xfb\") pod \"marketplace-operator-547dbd544d-xpvsb\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.617777 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.624231 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c"]
Dec 12 16:16:42 crc kubenswrapper[5130]: W1212 16:16:42.630629 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda78c6a97_054e_484e_aae2_a33bd3bb7b40.slice/crio-fe12aa686f8f130f2ed0db07a57b150e66a6ef1f7c1242cf968402245bac1b07 WatchSource:0}: Error finding container fe12aa686f8f130f2ed0db07a57b150e66a6ef1f7c1242cf968402245bac1b07: Status 404 returned error can't find the container with id fe12aa686f8f130f2ed0db07a57b150e66a6ef1f7c1242cf968402245bac1b07
Dec 12 16:16:42 crc kubenswrapper[5130]: W1212 16:16:42.637975 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd55f43e2_46df_4460_b17f_0daa75b89154.slice/crio-5260b3857fe9178b42ba78b26a810de66780669b2c78a7cae29a736661bc1aa5 WatchSource:0}: Error finding container 5260b3857fe9178b42ba78b26a810de66780669b2c78a7cae29a736661bc1aa5: Status 404 returned error can't find the container with id 5260b3857fe9178b42ba78b26a810de66780669b2c78a7cae29a736661bc1aa5
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.641965 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdt5\" (UniqueName: \"kubernetes.io/projected/f967d508-b683-4df4-9be0-3a7fb5afa7bb-kube-api-access-5qdt5\") pod \"downloads-747b44746d-sm46g\" (UID: \"f967d508-b683-4df4-9be0-3a7fb5afa7bb\") " pod="openshift-console/downloads-747b44746d-sm46g"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.661025 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.664649 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkzkz\" (UniqueName: \"kubernetes.io/projected/1999cfc6-e5a0-4ddb-883d-71f861b286a8-kube-api-access-dkzkz\") pod \"machine-config-operator-67c9d58cbb-bg744\" (UID: \"1999cfc6-e5a0-4ddb-883d-71f861b286a8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.675220 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xjv\" (UniqueName: \"kubernetes.io/projected/338f89a1-1c2f-4e37-9572-c5b13d682ca9-kube-api-access-z2xjv\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.687882 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.694091 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-gsm6t" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.694983 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/338f89a1-1c2f-4e37-9572-c5b13d682ca9-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5twrv\" (UID: \"338f89a1-1c2f-4e37-9572-c5b13d682ca9\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.702751 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.718140 5130 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.719383 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-594n6\" (UniqueName: \"kubernetes.io/projected/124ec2f9-0e23-47da-b25f-66a13947465e-kube-api-access-594n6\") pod \"olm-operator-5cdf44d969-kcw92\" (UID: \"124ec2f9-0e23-47da-b25f-66a13947465e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.731886 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tqcqf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.838549 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.839933 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-metrics-certs\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.839968 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-885wm\" (UniqueName: \"kubernetes.io/projected/6e354e82-d648-4680-b0c8-e901bfcfbd5f-kube-api-access-885wm\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.839990 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c6kv\" (UniqueName: \"kubernetes.io/projected/5a94df8d-2607-41a1-b1f9-21016895dcd6-kube-api-access-6c6kv\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840009 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb351b5c-811a-4e79-ace2-5d78737aef4c-serving-cert\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840029 5130 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e1875478-2fa5-47f4-9c0a-13afc9166e8e-tmp-dir\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840048 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tf9\" (UniqueName: \"kubernetes.io/projected/e1875478-2fa5-47f4-9c0a-13afc9166e8e-kube-api-access-s9tf9\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840064 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrgd\" (UniqueName: \"kubernetes.io/projected/1a9ac0b2-cad1-44fa-993c-0ae63193f086-kube-api-access-6wrgd\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840080 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-encryption-config\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840108 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/097ff9f3-52cb-4063-a6a1-0c8178adccc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840130 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/693e66ed-f826-4819-a47d-f32faf9dab96-node-pullsecrets\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840159 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-audit\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840206 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840242 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/162da780-4bd3-4acf-b114-06ae104fc8ad-installation-pull-secrets\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840281 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/6e354e82-d648-4680-b0c8-e901bfcfbd5f-webhook-cert\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840302 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/693e66ed-f826-4819-a47d-f32faf9dab96-audit-dir\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840323 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-client\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840347 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47f6\" (UniqueName: \"kubernetes.io/projected/00c7f3b3-f4dd-4d19-9739-512a35f436f5-kube-api-access-h47f6\") pod \"package-server-manager-77f986bd66-mjzlp\" (UID: \"00c7f3b3-f4dd-4d19-9739-512a35f436f5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840371 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840404 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840424 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840457 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-certificates\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840475 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e354e82-d648-4680-b0c8-e901bfcfbd5f-apiservice-cert\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840532 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-default-certificate\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840567 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a9ac0b2-cad1-44fa-993c-0ae63193f086-service-ca-bundle\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840585 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2dp4\" (UniqueName: \"kubernetes.io/projected/693e66ed-f826-4819-a47d-f32faf9dab96-kube-api-access-w2dp4\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840609 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1875478-2fa5-47f4-9c0a-13afc9166e8e-metrics-tls\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840651 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-stats-auth\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 
16:16:42.840669 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7hhb\" (UniqueName: \"kubernetes.io/projected/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-kube-api-access-x7hhb\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840686 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/eb351b5c-811a-4e79-ace2-5d78737aef4c-available-featuregates\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840739 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-image-import-ca\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840782 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-serving-cert\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840818 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvtf8\" (UniqueName: \"kubernetes.io/projected/eb351b5c-811a-4e79-ace2-5d78737aef4c-kube-api-access-tvtf8\") pod 
\"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840857 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840872 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8889\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-kube-api-access-q8889\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840899 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840929 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqmr\" (UniqueName: \"kubernetes.io/projected/097ff9f3-52cb-4063-a6a1-0c8178adccc9-kube-api-access-qjqmr\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" Dec 
12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.840969 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-service-ca\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.841005 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/162da780-4bd3-4acf-b114-06ae104fc8ad-ca-trust-extracted\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.841042 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a94df8d-2607-41a1-b1f9-21016895dcd6-tmpfs\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.841098 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.841121 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/5a94df8d-2607-41a1-b1f9-21016895dcd6-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.841142 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlcll\" (UniqueName: \"kubernetes.io/projected/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-kube-api-access-zlcll\") pod \"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.846564 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-ready\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:42 crc kubenswrapper[5130]: E1212 16:16:42.847074 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:43.347054544 +0000 UTC m=+103.244729366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.847174 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c7f3b3-f4dd-4d19-9739-512a35f436f5-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-mjzlp\" (UID: \"00c7f3b3-f4dd-4d19-9739-512a35f436f5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.847289 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxx89\" (UniqueName: \"kubernetes.io/projected/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-kube-api-access-wxx89\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.847338 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.847357 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.847389 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6e354e82-d648-4680-b0c8-e901bfcfbd5f-tmpfs\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.848146 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/097ff9f3-52cb-4063-a6a1-0c8178adccc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.848197 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-842m4\" (UniqueName: \"kubernetes.io/projected/be106c32-9849-49fd-9e4a-4b5b9c16920a-kube-api-access-842m4\") pod \"multus-admission-controller-69db94689b-xks9x\" (UID: \"be106c32-9849-49fd-9e4a-4b5b9c16920a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.848219 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-serving-cert\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: 
\"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.848237 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.848869 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfw7\" (UniqueName: \"kubernetes.io/projected/2403b973-68b3-4a15-a444-7e271aea91c1-kube-api-access-4dfw7\") pod \"migrator-866fcbc849-6mhsj\" (UID: \"2403b973-68b3-4a15-a444-7e271aea91c1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.848966 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-bound-sa-token\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.849006 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-config\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.849159 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.849354 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-tls\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.849389 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-ca\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.849407 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be106c32-9849-49fd-9e4a-4b5b9c16920a-webhook-certs\") pod \"multus-admission-controller-69db94689b-xks9x\" (UID: \"be106c32-9849-49fd-9e4a-4b5b9c16920a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xks9x"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.849486 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-etcd-client\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.850026 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-tmp-dir\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.850082 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-trusted-ca\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.850107 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-config\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.850130 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a94df8d-2607-41a1-b1f9-21016895dcd6-srv-cert\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.861580 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.878399 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-sm46g"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.888595 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48422: no serving certificate available for the kubelet"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.905612 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" event={"ID":"1bfafc57-4718-4d71-9f69-52b321379a27","Type":"ContainerStarted","Data":"3b44176be13059d76a31fc5227838299c5498420cfa28047c386bb82da1f040c"}
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.907496 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" event={"ID":"a78c6a97-054e-484e-aae2-a33bd3bb7b40","Type":"ContainerStarted","Data":"5afed13e7cab1d026459fabea793580ca962d81aa42c1db7a9cb82b49da4a6ad"}
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.907528 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" event={"ID":"a78c6a97-054e-484e-aae2-a33bd3bb7b40","Type":"ContainerStarted","Data":"fe12aa686f8f130f2ed0db07a57b150e66a6ef1f7c1242cf968402245bac1b07"}
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.909812 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.910083 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.910739 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" event={"ID":"d55f43e2-46df-4460-b17f-0daa75b89154","Type":"ContainerStarted","Data":"5260b3857fe9178b42ba78b26a810de66780669b2c78a7cae29a736661bc1aa5"}
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.922163 5130 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-zksq4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.922286 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951300 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951606 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951647 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q8889\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-kube-api-access-q8889\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951672 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951708 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qjqmr\" (UniqueName: \"kubernetes.io/projected/097ff9f3-52cb-4063-a6a1-0c8178adccc9-kube-api-access-qjqmr\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951748 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-service-ca\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951809 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/162da780-4bd3-4acf-b114-06ae104fc8ad-ca-trust-extracted\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951841 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-socket-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951869 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a94df8d-2607-41a1-b1f9-21016895dcd6-tmpfs\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951892 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-csi-data-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951955 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5a94df8d-2607-41a1-b1f9-21016895dcd6-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.951982 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zlcll\" (UniqueName: \"kubernetes.io/projected/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-kube-api-access-zlcll\") pod \"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952024 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-ready\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952069 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c7f3b3-f4dd-4d19-9739-512a35f436f5-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-mjzlp\" (UID: \"00c7f3b3-f4dd-4d19-9739-512a35f436f5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952095 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wxx89\" (UniqueName: \"kubernetes.io/projected/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-kube-api-access-wxx89\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952130 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952148 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952195 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6e354e82-d648-4680-b0c8-e901bfcfbd5f-tmpfs\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952276 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/097ff9f3-52cb-4063-a6a1-0c8178adccc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952301 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-842m4\" (UniqueName: \"kubernetes.io/projected/be106c32-9849-49fd-9e4a-4b5b9c16920a-kube-api-access-842m4\") pod \"multus-admission-controller-69db94689b-xks9x\" (UID: \"be106c32-9849-49fd-9e4a-4b5b9c16920a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xks9x"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952321 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-serving-cert\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952345 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.952399 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4dfw7\" (UniqueName: \"kubernetes.io/projected/2403b973-68b3-4a15-a444-7e271aea91c1-kube-api-access-4dfw7\") pod \"migrator-866fcbc849-6mhsj\" (UID: \"2403b973-68b3-4a15-a444-7e271aea91c1\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.957968 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/162da780-4bd3-4acf-b114-06ae104fc8ad-ca-trust-extracted\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.958467 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a94df8d-2607-41a1-b1f9-21016895dcd6-tmpfs\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.958832 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-service-ca\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: E1212 16:16:42.959400 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:43.459366596 +0000 UTC m=+103.357041428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.960474 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.960766 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6e354e82-d648-4680-b0c8-e901bfcfbd5f-tmpfs\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.961226 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-ready\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.962598 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-bound-sa-token\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.962914 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-config\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.962953 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.963132 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.963513 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-plugins-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.963709 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9kv\" (UniqueName: \"kubernetes.io/projected/62e07220-a49a-4989-8f0a-7eb7daf6fc61-kube-api-access-vf9kv\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.964822 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-tls\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.964936 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-ca\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.965016 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be106c32-9849-49fd-9e4a-4b5b9c16920a-webhook-certs\") pod \"multus-admission-controller-69db94689b-xks9x\" (UID: \"be106c32-9849-49fd-9e4a-4b5b9c16920a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xks9x"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.965111 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc06dad-6486-4dd5-9456-40ce964abc7f-config-volume\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.965316 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-etcd-client\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.965400 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/62e07220-a49a-4989-8f0a-7eb7daf6fc61-certs\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.965517 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-tmp-dir\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.965649 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-trusted-ca\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.965765 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-config\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.966353 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a94df8d-2607-41a1-b1f9-21016895dcd6-srv-cert\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.966793 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-serving-cert\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.967331 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-config\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.968147 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/097ff9f3-52cb-4063-a6a1-0c8178adccc9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.967756 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/62e07220-a49a-4989-8f0a-7eb7daf6fc61-node-bootstrap-token\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.970453 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-metrics-certs\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.970504 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-885wm\" (UniqueName: \"kubernetes.io/projected/6e354e82-d648-4680-b0c8-e901bfcfbd5f-kube-api-access-885wm\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.970543 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6c6kv\" (UniqueName: \"kubernetes.io/projected/5a94df8d-2607-41a1-b1f9-21016895dcd6-kube-api-access-6c6kv\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.970700 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb351b5c-811a-4e79-ace2-5d78737aef4c-serving-cert\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.970795 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e1875478-2fa5-47f4-9c0a-13afc9166e8e-tmp-dir\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.970944 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tf9\" (UniqueName: \"kubernetes.io/projected/e1875478-2fa5-47f4-9c0a-13afc9166e8e-kube-api-access-s9tf9\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.970980 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrgd\" (UniqueName: \"kubernetes.io/projected/1a9ac0b2-cad1-44fa-993c-0ae63193f086-kube-api-access-6wrgd\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971007 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-encryption-config\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971037 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/097ff9f3-52cb-4063-a6a1-0c8178adccc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971065 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/693e66ed-f826-4819-a47d-f32faf9dab96-node-pullsecrets\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971098 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-audit\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971145 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971196 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-registration-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971226 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/162da780-4bd3-4acf-b114-06ae104fc8ad-installation-pull-secrets\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971296 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e354e82-d648-4680-b0c8-e901bfcfbd5f-webhook-cert\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971330 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/693e66ed-f826-4819-a47d-f32faf9dab96-audit-dir\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971357 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-client\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971386 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h47f6\" (UniqueName: \"kubernetes.io/projected/00c7f3b3-f4dd-4d19-9739-512a35f436f5-kube-api-access-h47f6\") pod \"package-server-manager-77f986bd66-mjzlp\" (UID: \"00c7f3b3-f4dd-4d19-9739-512a35f436f5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971431 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-config\") pod \"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971492 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971542 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9dc06dad-6486-4dd5-9456-40ce964abc7f-metrics-tls\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971573 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9rw\" (UniqueName: \"kubernetes.io/projected/9dc06dad-6486-4dd5-9456-40ce964abc7f-kube-api-access-bq9rw\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971602 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971634 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-certificates\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971657 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e354e82-d648-4680-b0c8-e901bfcfbd5f-apiservice-cert\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971689 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-default-certificate\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx"
Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971712 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9dc06dad-6486-4dd5-9456-40ce964abc7f-tmp-dir\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:42 crc kubenswrapper[5130]:
I1212 16:16:42.971794 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbnm\" (UniqueName: \"kubernetes.io/projected/e0adb788-edae-4099-900e-8af998a81f87-kube-api-access-mlbnm\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971858 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a9ac0b2-cad1-44fa-993c-0ae63193f086-service-ca-bundle\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971882 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w2dp4\" (UniqueName: \"kubernetes.io/projected/693e66ed-f826-4819-a47d-f32faf9dab96-kube-api-access-w2dp4\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971949 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1875478-2fa5-47f4-9c0a-13afc9166e8e-metrics-tls\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971984 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-mountpoint-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " 
pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.971983 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-config\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972016 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-stats-auth\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972415 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-tmp-dir\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972437 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972655 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7hhb\" (UniqueName: \"kubernetes.io/projected/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-kube-api-access-x7hhb\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.969365 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c7f3b3-f4dd-4d19-9739-512a35f436f5-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-mjzlp\" (UID: \"00c7f3b3-f4dd-4d19-9739-512a35f436f5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972749 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-ca\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972851 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/eb351b5c-811a-4e79-ace2-5d78737aef4c-available-featuregates\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972988 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-image-import-ca\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.973142 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-config\") pod 
\"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.973595 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/eb351b5c-811a-4e79-ace2-5d78737aef4c-available-featuregates\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.972699 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-trusted-ca\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.973801 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/693e66ed-f826-4819-a47d-f32faf9dab96-node-pullsecrets\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.974036 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/693e66ed-f826-4819-a47d-f32faf9dab96-audit-dir\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.974448 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.974477 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e1875478-2fa5-47f4-9c0a-13afc9166e8e-tmp-dir\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.974559 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-audit\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.974893 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-etcd-client\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.975410 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/097ff9f3-52cb-4063-a6a1-0c8178adccc9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.975471 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.976471 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.977400 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/693e66ed-f826-4819-a47d-f32faf9dab96-image-import-ca\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.977486 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-certificates\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.977704 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-serving-cert\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.977848 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-tvtf8\" (UniqueName: \"kubernetes.io/projected/eb351b5c-811a-4e79-ace2-5d78737aef4c-kube-api-access-tvtf8\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.978067 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a9ac0b2-cad1-44fa-993c-0ae63193f086-service-ca-bundle\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.978201 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.982155 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.982612 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5a94df8d-2607-41a1-b1f9-21016895dcd6-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.982878 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/162da780-4bd3-4acf-b114-06ae104fc8ad-installation-pull-secrets\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.983435 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-tls\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.983695 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1875478-2fa5-47f4-9c0a-13afc9166e8e-metrics-tls\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.985412 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-default-certificate\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.986401 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a94df8d-2607-41a1-b1f9-21016895dcd6-srv-cert\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: 
\"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.992520 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-etcd-client\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.992976 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/be106c32-9849-49fd-9e4a-4b5b9c16920a-webhook-certs\") pod \"multus-admission-controller-69db94689b-xks9x\" (UID: \"be106c32-9849-49fd-9e4a-4b5b9c16920a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.993551 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60d98f7f-99e4-4bb4-a7b6-48de2ff6071c-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dcs9d\" (UID: \"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.994793 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e354e82-d648-4680-b0c8-e901bfcfbd5f-webhook-cert\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.996006 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6e354e82-d648-4680-b0c8-e901bfcfbd5f-apiservice-cert\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.996546 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-serving-cert\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.996734 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-metrics-certs\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.996749 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb351b5c-811a-4e79-ace2-5d78737aef4c-serving-cert\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.997438 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8889\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-kube-api-access-q8889\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.997577 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/693e66ed-f826-4819-a47d-f32faf9dab96-encryption-config\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:42 crc kubenswrapper[5130]: I1212 16:16:42.998797 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1a9ac0b2-cad1-44fa-993c-0ae63193f086-stats-auth\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.020410 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxx89\" (UniqueName: \"kubernetes.io/projected/a6c070b2-83ee-4c73-9201-3ab5dcc9aeca-kube-api-access-wxx89\") pod \"etcd-operator-69b85846b6-mrrt5\" (UID: \"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.043579 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjqmr\" (UniqueName: \"kubernetes.io/projected/097ff9f3-52cb-4063-a6a1-0c8178adccc9-kube-api-access-qjqmr\") pod \"machine-config-controller-f9cdd68f7-ndnxt\" (UID: \"097ff9f3-52cb-4063-a6a1-0c8178adccc9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.063606 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.074410 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlcll\" (UniqueName: \"kubernetes.io/projected/dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7-kube-api-access-zlcll\") pod 
\"kube-storage-version-migrator-operator-565b79b866-krgxf\" (UID: \"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.082423 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9dc06dad-6486-4dd5-9456-40ce964abc7f-tmp-dir\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.082486 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbnm\" (UniqueName: \"kubernetes.io/projected/e0adb788-edae-4099-900e-8af998a81f87-kube-api-access-mlbnm\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.082529 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-mountpoint-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.082708 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-socket-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.082738 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-csi-data-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.083245 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-socket-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.083660 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.083990 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-plugins-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084014 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vf9kv\" (UniqueName: \"kubernetes.io/projected/62e07220-a49a-4989-8f0a-7eb7daf6fc61-kube-api-access-vf9kv\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084043 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc06dad-6486-4dd5-9456-40ce964abc7f-config-volume\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084104 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/62e07220-a49a-4989-8f0a-7eb7daf6fc61-certs\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084143 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/62e07220-a49a-4989-8f0a-7eb7daf6fc61-node-bootstrap-token\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084272 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-registration-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084348 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084415 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9dc06dad-6486-4dd5-9456-40ce964abc7f-metrics-tls\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g" Dec 
12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084440 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9rw\" (UniqueName: \"kubernetes.io/projected/9dc06dad-6486-4dd5-9456-40ce964abc7f-kube-api-access-bq9rw\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084577 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-csi-data-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084767 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-plugins-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.084912 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9dc06dad-6486-4dd5-9456-40ce964abc7f-tmp-dir\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.084961 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:43.584943121 +0000 UTC m=+103.482617943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.085537 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc06dad-6486-4dd5-9456-40ce964abc7f-config-volume\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.085605 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-registration-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.086099 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e0adb788-edae-4099-900e-8af998a81f87-mountpoint-dir\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc"
Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.091612 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-842m4\" (UniqueName: \"kubernetes.io/projected/be106c32-9849-49fd-9e4a-4b5b9c16920a-kube-api-access-842m4\") pod \"multus-admission-controller-69db94689b-xks9x\" (UID: \"be106c32-9849-49fd-9e4a-4b5b9c16920a\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.092247 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9dc06dad-6486-4dd5-9456-40ce964abc7f-metrics-tls\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.092394 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/62e07220-a49a-4989-8f0a-7eb7daf6fc61-node-bootstrap-token\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.092553 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/62e07220-a49a-4989-8f0a-7eb7daf6fc61-certs\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.094461 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.097007 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-zhgm9"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.105722 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfw7\" (UniqueName: \"kubernetes.io/projected/2403b973-68b3-4a15-a444-7e271aea91c1-kube-api-access-4dfw7\") pod \"migrator-866fcbc849-6mhsj\" (UID: \"2403b973-68b3-4a15-a444-7e271aea91c1\") " 
pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.150480 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.152060 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-bound-sa-token\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.172196 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wrgd\" (UniqueName: \"kubernetes.io/projected/1a9ac0b2-cad1-44fa-993c-0ae63193f086-kube-api-access-6wrgd\") pod \"router-default-68cf44c8b8-bqttx\" (UID: \"1a9ac0b2-cad1-44fa-993c-0ae63193f086\") " pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.194917 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.197085 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:43.697054519 +0000 UTC m=+103.594729351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.200016 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-885wm\" (UniqueName: \"kubernetes.io/projected/6e354e82-d648-4680-b0c8-e901bfcfbd5f-kube-api-access-885wm\") pod \"packageserver-7d4fc7d867-lfwgk\" (UID: \"6e354e82-d648-4680-b0c8-e901bfcfbd5f\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.238799 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c6kv\" (UniqueName: \"kubernetes.io/projected/5a94df8d-2607-41a1-b1f9-21016895dcd6-kube-api-access-6c6kv\") pod \"catalog-operator-75ff9f647d-4v9cj\" (UID: \"5a94df8d-2607-41a1-b1f9-21016895dcd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.239895 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tf9\" (UniqueName: \"kubernetes.io/projected/e1875478-2fa5-47f4-9c0a-13afc9166e8e-kube-api-access-s9tf9\") pod \"dns-operator-799b87ffcd-2w9hn\" (UID: \"e1875478-2fa5-47f4-9c0a-13afc9166e8e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.242618 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.250007 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.251009 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7hhb\" (UniqueName: \"kubernetes.io/projected/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-kube-api-access-x7hhb\") pod \"cni-sysctl-allowlist-ds-q8kdt\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.258575 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.266886 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.274721 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2dp4\" (UniqueName: \"kubernetes.io/projected/693e66ed-f826-4819-a47d-f32faf9dab96-kube-api-access-w2dp4\") pod \"apiserver-9ddfb9f55-sg8rq\" (UID: \"693e66ed-f826-4819-a47d-f32faf9dab96\") " pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.276403 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.286126 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.302950 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.303283 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:43.803267312 +0000 UTC m=+103.700942144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.311645 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.315318 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h47f6\" (UniqueName: \"kubernetes.io/projected/00c7f3b3-f4dd-4d19-9739-512a35f436f5-kube-api-access-h47f6\") pod \"package-server-manager-77f986bd66-mjzlp\" (UID: \"00c7f3b3-f4dd-4d19-9739-512a35f436f5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.316520 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5tw72"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.317166 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.322203 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf9kv\" (UniqueName: \"kubernetes.io/projected/62e07220-a49a-4989-8f0a-7eb7daf6fc61-kube-api-access-vf9kv\") pod \"machine-config-server-nwxp2\" (UID: \"62e07220-a49a-4989-8f0a-7eb7daf6fc61\") " pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.322560 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.335750 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbnm\" (UniqueName: \"kubernetes.io/projected/e0adb788-edae-4099-900e-8af998a81f87-kube-api-access-mlbnm\") pod \"csi-hostpathplugin-59hhc\" (UID: \"e0adb788-edae-4099-900e-8af998a81f87\") " pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.340851 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvtf8\" (UniqueName: \"kubernetes.io/projected/eb351b5c-811a-4e79-ace2-5d78737aef4c-kube-api-access-tvtf8\") pod \"openshift-config-operator-5777786469-49zmj\" (UID: \"eb351b5c-811a-4e79-ace2-5d78737aef4c\") " pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.342832 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.360550 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nwxp2" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.366097 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9rw\" (UniqueName: \"kubernetes.io/projected/9dc06dad-6486-4dd5-9456-40ce964abc7f-kube-api-access-bq9rw\") pod \"dns-default-rl44g\" (UID: \"9dc06dad-6486-4dd5-9456-40ce964abc7f\") " pod="openshift-dns/dns-default-rl44g" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.384594 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.405856 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.406296 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:43.906269447 +0000 UTC m=+103.803944269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.407517 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.408372 5130 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:43.908360508 +0000 UTC m=+103.806035340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.416512 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.423336 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rl44g" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.471991 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.484534 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.493120 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.501586 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.510558 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.510741 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.010703246 +0000 UTC m=+103.908378078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.511346 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.511808 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.011801353 +0000 UTC m=+103.909476185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.514377 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-gsm6t"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.582429 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" podStartSLOduration=83.582415677 podStartE2EDuration="1m23.582415677s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:43.579774832 +0000 UTC m=+103.477449664" watchObservedRunningTime="2025-12-12 16:16:43.582415677 +0000 UTC m=+103.480090499" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.584591 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744"] Dec 12 16:16:43 crc kubenswrapper[5130]: W1212 16:16:43.594502 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b00dfbb_ff49_4fb2_bf80_0ad5f48198f7.slice/crio-cdd4b26e97241fbc52121884f8e472181831c434a65673b7d8859d2c2b10af54 WatchSource:0}: Error finding container cdd4b26e97241fbc52121884f8e472181831c434a65673b7d8859d2c2b10af54: Status 404 
returned error can't find the container with id cdd4b26e97241fbc52121884f8e472181831c434a65673b7d8859d2c2b10af54 Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.598897 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48428: no serving certificate available for the kubelet" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.615769 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.616089 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.116071639 +0000 UTC m=+104.013746471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.618221 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.630366 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.655452 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-flnsl"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.688655 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tqcqf"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.724393 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.724903 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:44.224876715 +0000 UTC m=+104.122551547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.759322 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xks9x"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.790793 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zf8cv" podStartSLOduration=84.790767694 podStartE2EDuration="1m24.790767694s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:43.773934933 +0000 UTC m=+103.671609755" watchObservedRunningTime="2025-12-12 16:16:43.790767694 +0000 UTC m=+103.688442526" Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.793830 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.811534 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.825902 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.826770 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.326714481 +0000 UTC m=+104.224389313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.831942 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-brfdj"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.832017 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.876226 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.909918 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-sm46g"] Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.933138 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:43 crc kubenswrapper[5130]: E1212 16:16:43.933651 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.433630992 +0000 UTC m=+104.331305824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.937659 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nwxp2" event={"ID":"62e07220-a49a-4989-8f0a-7eb7daf6fc61","Type":"ContainerStarted","Data":"ad94e9a5161eea124fd6a51943f5f56a22a137c00f343ff3b0093e1bc18ac725"} Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.952707 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" event={"ID":"9cc5b0f4-dc96-4a65-8404-f3d36ad70787","Type":"ContainerStarted","Data":"2e91da059032f204e0635056eca162922cc8d96e36eddcee276e26db40504fa7"} Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.965137 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-gsm6t" 
event={"ID":"6baa2db5-b688-47dd-8d81-7dadbbbd3759","Type":"ContainerStarted","Data":"8d1ab54b80fb5cd41339903f0b79bcc9051bd0ddd510fc12c03b27b312424770"} Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.972861 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" event={"ID":"4c111429-5512-4d9c-898b-d3ec0bdb5d08","Type":"ContainerStarted","Data":"7fc07748f28ad23d569f851a2e2338c4bb871689212066814cea4580cd9faf67"} Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.986908 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" event={"ID":"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9","Type":"ContainerStarted","Data":"a586e36317c1dad58a3f250eb491cacfc3c9a9f2c0593e3b418803d4fd07f2f5"} Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.991375 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5tw72" event={"ID":"65efae24-6623-454c-b665-e5e407e86269","Type":"ContainerStarted","Data":"e5e60228e9d988aefb88921e0968711cfd881db58c97c8a8c5b23da573180a35"} Dec 12 16:16:43 crc kubenswrapper[5130]: I1212 16:16:43.997714 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" event={"ID":"d55f43e2-46df-4460-b17f-0daa75b89154","Type":"ContainerStarted","Data":"e4d4ae49d25ec429403ecf58784fd13b1375221a9514cf65b29a19228fdc252e"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.015948 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" event={"ID":"1999cfc6-e5a0-4ddb-883d-71f861b286a8","Type":"ContainerStarted","Data":"d2cde16dded540f6a71d463b56d9b3fbe9cdcaa1c96d46e5cd1e32779c9eb5af"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.017606 5130 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" event={"ID":"d259a06e-3949-41b6-a067-7c01441da4b1","Type":"ContainerStarted","Data":"2bf714089818fd6477a262dc7b43a76fa700b53d570bf643af2f365afa9909f2"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.034001 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.036853 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.535300204 +0000 UTC m=+104.432975036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.044982 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" event={"ID":"19e81fea-065e-43b5-8e56-49bfcfa342f7","Type":"ContainerStarted","Data":"328df9b4f48f0adc7c6483781e32bef2bbf38c7a3bc72162f9752fc54e642716"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.047807 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5"] Dec 12 16:16:44 crc kubenswrapper[5130]: W1212 16:16:44.048114 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe106c32_9849_49fd_9e4a_4b5b9c16920a.slice/crio-c49e08c7880c95445bad61bbd80bc3fa3e3679d96d95853d151678fb7b42e5db WatchSource:0}: Error finding container c49e08c7880c95445bad61bbd80bc3fa3e3679d96d95853d151678fb7b42e5db: Status 404 returned error can't find the container with id c49e08c7880c95445bad61bbd80bc3fa3e3679d96d95853d151678fb7b42e5db Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.054059 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" event={"ID":"22a6a238-12c9-43ae-afbc-f9595d46e727","Type":"ContainerStarted","Data":"755f26522b7a517535508aa0c7585634c4261c35ec0bddd08f3d85a3886e6e64"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.056777 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" event={"ID":"2a282672-c872-405b-9325-f8f48865334c","Type":"ContainerStarted","Data":"d25a5167e83c106fd6aae82bd4f1881d7b1012c90d8673c0eb50d806ecfe8a9d"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.063507 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" event={"ID":"d943d968-b5e5-4d94-8fc7-8ba0013e5d76","Type":"ContainerStarted","Data":"f9dd92ceda1a3912c46704c32056af091c91e3402addb86408de2701845d893b"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.077251 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.084694 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" event={"ID":"1a9ac0b2-cad1-44fa-993c-0ae63193f086","Type":"ContainerStarted","Data":"a1668d3df1c01d8fd70c92da65b94295073bd3785e68bb2eea0502102450fed0"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.088998 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tqcqf" event={"ID":"47102097-389c-44ce-a25f-6b8d25a70e1d","Type":"ContainerStarted","Data":"7de9f74eaf8718433f9098110c5587849d2104fa8fe5544832d4f1d0b4185212"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.105892 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" event={"ID":"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7","Type":"ContainerStarted","Data":"cdd4b26e97241fbc52121884f8e472181831c434a65673b7d8859d2c2b10af54"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.128586 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-zhgm9" 
event={"ID":"4651322b-9aec-4667-afa3-1602ad5176fe","Type":"ContainerStarted","Data":"ba64be3417dc3e71455104ebaddc799f261befb18354e094ff458a1acab48ce1"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.128652 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-zhgm9" event={"ID":"4651322b-9aec-4667-afa3-1602ad5176fe","Type":"ContainerStarted","Data":"33a249d7e78465e9a718be39e7a906df97782cbb66486425e35a61af822326a2"} Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.135665 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.135790 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.136278 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.636256719 +0000 UTC m=+104.533931551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.218464 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2w9hn"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.237081 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.237841 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.737816708 +0000 UTC m=+104.635491540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.244576 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.261398 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-dmjfw" podStartSLOduration=84.261371883 podStartE2EDuration="1m24.261371883s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:44.259554519 +0000 UTC m=+104.157229351" watchObservedRunningTime="2025-12-12 16:16:44.261371883 +0000 UTC m=+104.159046715" Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.338956 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.341351 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:44.841330086 +0000 UTC m=+104.739004918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.364477 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.366621 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"] Dec 12 16:16:44 crc kubenswrapper[5130]: W1212 16:16:44.416467 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e354e82_d648_4680_b0c8_e901bfcfbd5f.slice/crio-1b8886f3552073404488fbfde536fb57d7c18b1397ebdad914354193a33ea0ce WatchSource:0}: Error finding container 1b8886f3552073404488fbfde536fb57d7c18b1397ebdad914354193a33ea0ce: Status 404 returned error can't find the container with id 1b8886f3552073404488fbfde536fb57d7c18b1397ebdad914354193a33ea0ce Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.442304 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.442636 5130 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.942601508 +0000 UTC m=+104.840276340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.446758 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.450359 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:44.950344137 +0000 UTC m=+104.848018969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.480062 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-49zmj"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.516195 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.551816 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.552296 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.052169223 +0000 UTC m=+104.949844055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: W1212 16:16:44.567241 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d98f7f_99e4_4bb4_a7b6_48de2ff6071c.slice/crio-60ac5b8dfa3a85aae95f0de2721afb6ca7a3cce575e7ecfe2560293af9d7574f WatchSource:0}: Error finding container 60ac5b8dfa3a85aae95f0de2721afb6ca7a3cce575e7ecfe2560293af9d7574f: Status 404 returned error can't find the container with id 60ac5b8dfa3a85aae95f0de2721afb6ca7a3cce575e7ecfe2560293af9d7574f Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.594144 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-59hhc"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.596919 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.600939 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rl44g"] Dec 12 16:16:44 crc kubenswrapper[5130]: W1212 16:16:44.607517 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc06dad_6486_4dd5_9456_40ce964abc7f.slice/crio-6a1b62d609b3e5420485ba99a5b2f09e53a6d758231e9a8c18d91b6f411c606a WatchSource:0}: Error finding container 6a1b62d609b3e5420485ba99a5b2f09e53a6d758231e9a8c18d91b6f411c606a: Status 404 returned error can't find the container with id 
6a1b62d609b3e5420485ba99a5b2f09e53a6d758231e9a8c18d91b6f411c606a Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.613368 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-62rws" podStartSLOduration=85.613340767 podStartE2EDuration="1m25.613340767s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:44.609202166 +0000 UTC m=+104.506876998" watchObservedRunningTime="2025-12-12 16:16:44.613340767 +0000 UTC m=+104.511015599" Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.634689 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"] Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.654024 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.654503 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.154483661 +0000 UTC m=+105.052158493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.655089 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-sg8rq"] Dec 12 16:16:44 crc kubenswrapper[5130]: W1212 16:16:44.697400 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb351b5c_811a_4e79_ace2_5d78737aef4c.slice/crio-9151190c168a5118b524699b9ef9f7265e6266487898dc6b740d348f2d538032 WatchSource:0}: Error finding container 9151190c168a5118b524699b9ef9f7265e6266487898dc6b740d348f2d538032: Status 404 returned error can't find the container with id 9151190c168a5118b524699b9ef9f7265e6266487898dc6b740d348f2d538032 Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.759668 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.760112 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.260088649 +0000 UTC m=+105.157763481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: W1212 16:16:44.820255 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2403b973_68b3_4a15_a444_7e271aea91c1.slice/crio-22cad0b592117b6ba03b75b60b8f5302b6ad18f85483bc9c43e7174f7395c192 WatchSource:0}: Error finding container 22cad0b592117b6ba03b75b60b8f5302b6ad18f85483bc9c43e7174f7395c192: Status 404 returned error can't find the container with id 22cad0b592117b6ba03b75b60b8f5302b6ad18f85483bc9c43e7174f7395c192 Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.857695 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5" podStartSLOduration=84.857677142 podStartE2EDuration="1m24.857677142s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:44.85349827 +0000 UTC m=+104.751173112" watchObservedRunningTime="2025-12-12 16:16:44.857677142 +0000 UTC m=+104.755351974" Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.863506 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " 
pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.863957 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.363943465 +0000 UTC m=+105.261618297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.954533 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48444: no serving certificate available for the kubelet" Dec 12 16:16:44 crc kubenswrapper[5130]: I1212 16:16:44.964738 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:44 crc kubenswrapper[5130]: E1212 16:16:44.965268 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.465250848 +0000 UTC m=+105.362925680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.069829 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.070717 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.570697993 +0000 UTC m=+105.468372825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.172607 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.172913 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.672885887 +0000 UTC m=+105.570560719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.173028 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.173577 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.673570084 +0000 UTC m=+105.571244916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.187519 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" podStartSLOduration=86.187488574 podStartE2EDuration="1m26.187488574s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.149578068 +0000 UTC m=+105.047252900" watchObservedRunningTime="2025-12-12 16:16:45.187488574 +0000 UTC m=+105.085163406" Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.193485 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-zhgm9" podStartSLOduration=86.1934565 podStartE2EDuration="1m26.1934565s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.188635762 +0000 UTC m=+105.086310594" watchObservedRunningTime="2025-12-12 16:16:45.1934565 +0000 UTC m=+105.091131332" Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.210406 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" event={"ID":"e13eeec0-72dd-418b-9180-87ca0d56870d","Type":"ContainerStarted","Data":"63d4f7893d2a6e51680e692730931a8e2db49032b3b5feb5b320f7d42af3e4ba"} Dec 12 16:16:45 crc 
kubenswrapper[5130]: I1212 16:16:45.248454 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" event={"ID":"8b00dfbb-ff49-4fb2-bf80-0ad5f48198f7","Type":"ContainerStarted","Data":"a739c13f9e364ffc223763bd53faadf9777c076b6d016585aec6ac0cca5d6388"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.276576 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rl44g" event={"ID":"9dc06dad-6486-4dd5-9456-40ce964abc7f","Type":"ContainerStarted","Data":"6a1b62d609b3e5420485ba99a5b2f09e53a6d758231e9a8c18d91b6f411c606a"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.277212 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.278636 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.778613529 +0000 UTC m=+105.676288361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.282562 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" event={"ID":"097ff9f3-52cb-4063-a6a1-0c8178adccc9","Type":"ContainerStarted","Data":"e2067efe5898138e478d288454b72bfd1c053e7c41955ea58a683e8f80ed626f"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.301301 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9wbcx" podStartSLOduration=85.301268362 podStartE2EDuration="1m25.301268362s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.298902524 +0000 UTC m=+105.196577366" watchObservedRunningTime="2025-12-12 16:16:45.301268362 +0000 UTC m=+105.198943194"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.335937 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" event={"ID":"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7","Type":"ContainerStarted","Data":"d0489ccdfa99a6b99c7ac3a0e870ca5488db2ca428180aa64741de104ff8555e"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.366636 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nwxp2" event={"ID":"62e07220-a49a-4989-8f0a-7eb7daf6fc61","Type":"ContainerStarted","Data":"70f934bf127edaac4b641cbcbe3c0eb6ae8d43a584aa901b81a63bb273f40df7"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.378410 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.378891 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.878873077 +0000 UTC m=+105.776547909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.394229 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5tw72" event={"ID":"65efae24-6623-454c-b665-e5e407e86269","Type":"ContainerStarted","Data":"a6ec95b298f638fef26b6f6443b907d0afe121532781e888fb2b6993da2bc524"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.394915 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nwxp2" podStartSLOduration=6.394894518 podStartE2EDuration="6.394894518s" podCreationTimestamp="2025-12-12 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.394107298 +0000 UTC m=+105.291782130" watchObservedRunningTime="2025-12-12 16:16:45.394894518 +0000 UTC m=+105.292569360"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.394987 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.417525 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-5tw72" podStartSLOduration=86.41749727 podStartE2EDuration="1m26.41749727s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.416728761 +0000 UTC m=+105.314403603" watchObservedRunningTime="2025-12-12 16:16:45.41749727 +0000 UTC m=+105.315172102"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.422774 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj" event={"ID":"2403b973-68b3-4a15-a444-7e271aea91c1","Type":"ContainerStarted","Data":"22cad0b592117b6ba03b75b60b8f5302b6ad18f85483bc9c43e7174f7395c192"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.482707 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.482914 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.982871776 +0000 UTC m=+105.880546608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.483588 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.484763 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:45.984737121 +0000 UTC m=+105.882411983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.488513 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" event={"ID":"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c","Type":"ContainerStarted","Data":"60ac5b8dfa3a85aae95f0de2721afb6ca7a3cce575e7ecfe2560293af9d7574f"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.542287 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.542360 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.550405 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" event={"ID":"be106c32-9849-49fd-9e4a-4b5b9c16920a","Type":"ContainerStarted","Data":"c49e08c7880c95445bad61bbd80bc3fa3e3679d96d95853d151678fb7b42e5db"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.559701 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-gsm6t" event={"ID":"6baa2db5-b688-47dd-8d81-7dadbbbd3759","Type":"ContainerStarted","Data":"7996dbf716f45c63c2e2dfef679c8cc3d3da201e478311c38f249bdc7443ee15"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.579053 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" event={"ID":"4c111429-5512-4d9c-898b-d3ec0bdb5d08","Type":"ContainerStarted","Data":"2abe36fa1f639c1a9cdf40d16cf777d89d1db2ac9bb964bef0b49d5416d0e3f6"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.591873 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.593540 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.093516637 +0000 UTC m=+105.991191469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.596782 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.606691 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-gsm6t" podStartSLOduration=85.606669008 podStartE2EDuration="1m25.606669008s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.604749301 +0000 UTC m=+105.502424143" watchObservedRunningTime="2025-12-12 16:16:45.606669008 +0000 UTC m=+105.504343830"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.622587 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" event={"ID":"5274eff7-dc1d-4efb-aee0-4ab77a1dd3d9","Type":"ContainerStarted","Data":"d90685461a35cb32e8ebea715a20225a5193007e26bd32c28756a68c8feb413c"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.660801 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-nsdgk" podStartSLOduration=85.660771789 podStartE2EDuration="1m25.660771789s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.660620295 +0000 UTC m=+105.558295127" watchObservedRunningTime="2025-12-12 16:16:45.660771789 +0000 UTC m=+105.558446621"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.661861 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" event={"ID":"00c7f3b3-f4dd-4d19-9739-512a35f436f5","Type":"ContainerStarted","Data":"b300a9516c529128a4338eca3e85a4bc7c3e16956b5d84d622809c31a00f651b"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.668743 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" event={"ID":"eb351b5c-811a-4e79-ace2-5d78737aef4c","Type":"ContainerStarted","Data":"9151190c168a5118b524699b9ef9f7265e6266487898dc6b740d348f2d538032"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.690157 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" event={"ID":"1de41ef3-7896-4e9c-8201-8174bc4468c4","Type":"ContainerStarted","Data":"f6800f29ce6dfd01bbd7f9c0b999d8c7c936dd1b2b43419d7987576203561f95"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.690253 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" event={"ID":"1de41ef3-7896-4e9c-8201-8174bc4468c4","Type":"ContainerStarted","Data":"80eb6b504f4b6d215a1fdd56503837348aea4e832c71cca42b9c33074674fdba"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.691063 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.730841 5130 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-xpvsb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.730924 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.745363 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.748051 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.248031959 +0000 UTC m=+106.145706791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.786723 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" podStartSLOduration=85.786692152 podStartE2EDuration="1m25.786692152s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.763666511 +0000 UTC m=+105.661341343" watchObservedRunningTime="2025-12-12 16:16:45.786692152 +0000 UTC m=+105.684366984"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.813143 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sfm9v" podStartSLOduration=86.813124208 podStartE2EDuration="1m26.813124208s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:45.811908948 +0000 UTC m=+105.709583780" watchObservedRunningTime="2025-12-12 16:16:45.813124208 +0000 UTC m=+105.710799040"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.825621 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" event={"ID":"1999cfc6-e5a0-4ddb-883d-71f861b286a8","Type":"ContainerStarted","Data":"f84a80e23634bcf57ca6147f9cc8649f8743f5b833b2b78defd4dd65ef4051eb"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.848930 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.850935 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.35089955 +0000 UTC m=+106.248574382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.857942 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" event={"ID":"e1875478-2fa5-47f4-9c0a-13afc9166e8e","Type":"ContainerStarted","Data":"09a0809ecd406bcde0a1ea4cede12fbba5d473a8969cda641566b6406f205a3a"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.898419 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" event={"ID":"e0adb788-edae-4099-900e-8af998a81f87","Type":"ContainerStarted","Data":"7d7a7ab3b09b90e07fa8f45335ab5037ab6ebd27daf1316e35a98496a30a938f"}
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.899478 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-5tw72"
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.951092 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:45 crc kubenswrapper[5130]: E1212 16:16:45.953654 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.453633078 +0000 UTC m=+106.351307910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:45 crc kubenswrapper[5130]: I1212 16:16:45.973396 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" event={"ID":"2a282672-c872-405b-9325-f8f48865334c","Type":"ContainerStarted","Data":"6073232fe3793effd598ed66db7fc0cc2808f21153834200fa9496d9db1d7ff6"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.021255 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" event={"ID":"1a9ac0b2-cad1-44fa-993c-0ae63193f086","Type":"ContainerStarted","Data":"1152ee6062767664956ca2a0cc0a59a1153a7ab00e8fc53213023bd0e1cc514b"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.061768 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.062088 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.562069865 +0000 UTC m=+106.459744697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.062343 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" event={"ID":"338f89a1-1c2f-4e37-9572-c5b13d682ca9","Type":"ContainerStarted","Data":"46360fc83f1cc220589b13a8c27ffd0f5770b5b67075b08286b1c9ec648960cd"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.105705 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tqcqf" event={"ID":"47102097-389c-44ce-a25f-6b8d25a70e1d","Type":"ContainerStarted","Data":"a25cb8b272383a18db5c2568bf8295b848c048a6016c499f0de7d241d26f7b71"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.132644 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tqcqf" podStartSLOduration=7.132625668 podStartE2EDuration="7.132625668s" podCreationTimestamp="2025-12-12 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.131740766 +0000 UTC m=+106.029415598" watchObservedRunningTime="2025-12-12 16:16:46.132625668 +0000 UTC m=+106.030300500"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.134409 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podStartSLOduration=87.134402551 podStartE2EDuration="1m27.134402551s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.053841154 +0000 UTC m=+105.951516006" watchObservedRunningTime="2025-12-12 16:16:46.134402551 +0000 UTC m=+106.032077383"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.165736 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.166755 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.666741751 +0000 UTC m=+106.564416583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.202638 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" event={"ID":"9cc5b0f4-dc96-4a65-8404-f3d36ad70787","Type":"ContainerStarted","Data":"edb1be85faadf07ab0b8ea0d5a61ae6523d5c61fc75133c2a28ffc156bc794d1"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.269362 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.270708 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.770668798 +0000 UTC m=+106.668343780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.270910 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" event={"ID":"6e354e82-d648-4680-b0c8-e901bfcfbd5f","Type":"ContainerStarted","Data":"1b8886f3552073404488fbfde536fb57d7c18b1397ebdad914354193a33ea0ce"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.326959 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-bqttx"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.331027 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:46 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:46 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:46 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.331095 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.373076 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.373477 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.873461388 +0000 UTC m=+106.771136220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.416206 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xknw6" podStartSLOduration=87.4161629 podStartE2EDuration="1m27.4161629s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.243186797 +0000 UTC m=+106.140861639" watchObservedRunningTime="2025-12-12 16:16:46.4161629 +0000 UTC m=+106.313837732"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.418077 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" podStartSLOduration=86.418071267 podStartE2EDuration="1m26.418071267s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.408532854 +0000 UTC m=+106.306207686" watchObservedRunningTime="2025-12-12 16:16:46.418071267 +0000 UTC m=+106.315746099"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.439598 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.439670 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" event={"ID":"124ec2f9-0e23-47da-b25f-66a13947465e","Type":"ContainerStarted","Data":"18d21cb5d26cc9e52a4b1cd662ffdd3411b90b4dce8f03e4d38e895ed45f046f"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.439754 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.439767 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kcw92" event={"ID":"124ec2f9-0e23-47da-b25f-66a13947465e","Type":"ContainerStarted","Data":"821a1e5c7483c039e70cdf4bd3be662cd8906606bd0abbb72d4d89a53f25635c"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.439787 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" event={"ID":"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca","Type":"ContainerStarted","Data":"0702298b930d9df41b5167fe6b3a06e9d2f7dd988ab81b6dc47a3b0d81221bce"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.460545 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7" event={"ID":"9c49153e-af72-4d2f-8184-fa7ba43a5a3e","Type":"ContainerStarted","Data":"ee8088f28b3197cc5469945413422c389a49ba5ba0a440f3aa89c9ac372e7839"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.473664 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.474865 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:46.974848663 +0000 UTC m=+106.872523495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.480834 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" event={"ID":"693e66ed-f826-4819-a47d-f32faf9dab96","Type":"ContainerStarted","Data":"c8a5496c4a2472a0beb924c848fd9bb5edee60a3fd69c13df19d2b01d3a9ec7a"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.482866 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" event={"ID":"19e81fea-065e-43b5-8e56-49bfcfa342f7","Type":"ContainerStarted","Data":"371e9c27a2a1b4863d26a45d93c8501b34d0b3f1e281e503ae42d95a1a9e230b"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.499399 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" podStartSLOduration=87.499383812 podStartE2EDuration="1m27.499383812s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.464378218 +0000 UTC m=+106.362053050" watchObservedRunningTime="2025-12-12 16:16:46.499383812 +0000 UTC m=+106.397058644"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.520974 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-sm46g" event={"ID":"f967d508-b683-4df4-9be0-3a7fb5afa7bb","Type":"ContainerStarted","Data":"fdca212b909c7f735bc64f2733588572386b0252f8ee3ee0ccb2ee9c6af3fae7"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.524850 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-sm46g"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.544599 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" event={"ID":"5a94df8d-2607-41a1-b1f9-21016895dcd6","Type":"ContainerStarted","Data":"158cfb1d84690be2e4cd14b1137db76e8e77421e894f66629f65967812dc332a"}
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.549045 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7" podStartSLOduration=86.549030284 podStartE2EDuration="1m26.549030284s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.500630203 +0000 UTC m=+106.398305025" watchObservedRunningTime="2025-12-12 16:16:46.549030284 +0000 UTC m=+106.446705116"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.558397 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-njgb5"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.567307 5130 patch_prober.go:28] interesting pod/downloads-747b44746d-sm46g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.567418 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-sm46g" podUID="f967d508-b683-4df4-9be0-3a7fb5afa7bb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.580458 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.582606 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.082585954 +0000 UTC m=+106.980260786 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.628950 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-sm46g" podStartSLOduration=87.628927435 podStartE2EDuration="1m27.628927435s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.619013513 +0000 UTC m=+106.516688345" watchObservedRunningTime="2025-12-12 16:16:46.628927435 +0000 UTC m=+106.526602267" Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.682971 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.684103 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.184078021 +0000 UTC m=+107.081752853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.690763 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" podStartSLOduration=86.690721974 podStartE2EDuration="1m26.690721974s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:46.666674277 +0000 UTC m=+106.564349109" watchObservedRunningTime="2025-12-12 16:16:46.690721974 +0000 UTC m=+106.588396806" Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.785687 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.786419 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.286396989 +0000 UTC m=+107.184071821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.892971 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.893274 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.393223218 +0000 UTC m=+107.290898050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:46 crc kubenswrapper[5130]: I1212 16:16:46.893767 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:46 crc kubenswrapper[5130]: E1212 16:16:46.894740 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.394713354 +0000 UTC m=+107.292388186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:46.997985 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:46.999016 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.49899084 +0000 UTC m=+107.396665672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.100082 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.100507 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.600493878 +0000 UTC m=+107.498168710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.208394 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.208976 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.708952476 +0000 UTC m=+107.606627308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.316333 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.317759 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.817086126 +0000 UTC m=+107.714760958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.331658 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:16:47 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld Dec 12 16:16:47 crc kubenswrapper[5130]: [+]process-running ok Dec 12 16:16:47 crc kubenswrapper[5130]: healthz check failed Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.331747 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.420410 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.420575 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:47.920534382 +0000 UTC m=+107.818209204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.421269 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.421749 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:47.921729271 +0000 UTC m=+107.819404103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.530365 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.530644 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.03062542 +0000 UTC m=+107.928300252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.578118 5130 ???:1] "http: TLS handshake error from 192.168.126.11:48448: no serving certificate available for the kubelet" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.620554 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" event={"ID":"1999cfc6-e5a0-4ddb-883d-71f861b286a8","Type":"ContainerStarted","Data":"49fb87e00825357375696fd03967de64a89b09c721bb15499b82f59aa1ef53cb"} Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.643559 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.644074 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.144054639 +0000 UTC m=+108.041729471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.647276 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" event={"ID":"e1875478-2fa5-47f4-9c0a-13afc9166e8e","Type":"ContainerStarted","Data":"511183a80ba7426282d8a2ef47537aafb960d27ee9ec54d29a02505745f2c79a"} Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.676542 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" event={"ID":"d259a06e-3949-41b6-a067-7c01441da4b1","Type":"ContainerStarted","Data":"6a61ed43182c84d5f5ba853b183f677998fddb6810cc65d32ca11633c12c5ced"} Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.677358 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.702819 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" event={"ID":"22a6a238-12c9-43ae-afbc-f9595d46e727","Type":"ContainerStarted","Data":"1a206e38f20382df21ab74f9a8bc73d1ec4e67d4b602fb6c4c84f186028e13b6"} Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.712795 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.713973 5130 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-bg744" podStartSLOduration=87.713682389 podStartE2EDuration="1m27.713682389s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:47.658475601 +0000 UTC m=+107.556150433" watchObservedRunningTime="2025-12-12 16:16:47.713682389 +0000 UTC m=+107.611357221" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.715482 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" podStartSLOduration=88.715476633 podStartE2EDuration="1m28.715476633s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:47.712890339 +0000 UTC m=+107.610565191" watchObservedRunningTime="2025-12-12 16:16:47.715476633 +0000 UTC m=+107.613151465" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.746396 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.747939 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.247914204 +0000 UTC m=+108.145589036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.765444 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" event={"ID":"2a282672-c872-405b-9325-f8f48865334c","Type":"ContainerStarted","Data":"f829bccf553f311fbe0399fdad6aa3d254c7e9c5d9d156ef946482d99b1ddfcb"} Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.779326 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" event={"ID":"d943d968-b5e5-4d94-8fc7-8ba0013e5d76","Type":"ContainerStarted","Data":"a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45"} Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.779977 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.824287 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-wff8v" podStartSLOduration=87.824268799 podStartE2EDuration="1m27.824268799s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:47.771609823 +0000 UTC m=+107.669284655" watchObservedRunningTime="2025-12-12 16:16:47.824268799 +0000 UTC m=+107.721943631" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.856125 5130 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.859434 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.359416087 +0000 UTC m=+108.257090919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.867453 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" podStartSLOduration=8.867418052 podStartE2EDuration="8.867418052s" podCreationTimestamp="2025-12-12 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:47.863345153 +0000 UTC m=+107.761020005" watchObservedRunningTime="2025-12-12 16:16:47.867418052 +0000 UTC m=+107.765092884" Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.871492 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" 
event={"ID":"338f89a1-1c2f-4e37-9572-c5b13d682ca9","Type":"ContainerStarted","Data":"842035260bd8f358884ff1e9e32e97e45086ba20d373161cf2bfac2c0e3b12d3"}
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.871578 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" event={"ID":"338f89a1-1c2f-4e37-9572-c5b13d682ca9","Type":"ContainerStarted","Data":"763c3354186a52b009c7bc9d8a3b7e385e05239105963600b3e18a1fc2e2eef0"}
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.871727 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt"
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.890453 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" event={"ID":"6e354e82-d648-4680-b0c8-e901bfcfbd5f","Type":"ContainerStarted","Data":"1dcd89f28af7ef252cfb2bc92f4dd31152abc9c1480c01403d1fce09b9b419ba"}
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.891723 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.905454 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fzlkp" podStartSLOduration=88.90543163 podStartE2EDuration="1m28.90543163s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:47.904227041 +0000 UTC m=+107.801901873" watchObservedRunningTime="2025-12-12 16:16:47.90543163 +0000 UTC m=+107.803106462"
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.909169 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-mrrt5" event={"ID":"a6c070b2-83ee-4c73-9201-3ab5dcc9aeca","Type":"ContainerStarted","Data":"6c3cf0fda0b75955c6688909a7042cccc9e909cc0ddf4f5c1e05ab91a1ae0218"}
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.935564 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-m8gw7" event={"ID":"9c49153e-af72-4d2f-8184-fa7ba43a5a3e","Type":"ContainerStarted","Data":"0e3540d26406f3e286a3bf3897e15d3fca47349581a6cdb18136a150ed74c525"}
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.957925 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:47 crc kubenswrapper[5130]: E1212 16:16:47.959635 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.459610743 +0000 UTC m=+108.357285575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.970756 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5twrv" podStartSLOduration=88.970728274 podStartE2EDuration="1m28.970728274s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:47.933105566 +0000 UTC m=+107.830780398" watchObservedRunningTime="2025-12-12 16:16:47.970728274 +0000 UTC m=+107.868403106"
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.987635 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-sm46g" event={"ID":"f967d508-b683-4df4-9be0-3a7fb5afa7bb","Type":"ContainerStarted","Data":"c8c2f5e065c129aafaa852e8c33f390ed98efea4433e1004cfcbe52588b48e61"}
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.991558 5130 patch_prober.go:28] interesting pod/downloads-747b44746d-sm46g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 12 16:16:47 crc kubenswrapper[5130]: I1212 16:16:47.991615 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-sm46g" podUID="f967d508-b683-4df4-9be0-3a7fb5afa7bb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.001645 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" event={"ID":"5a94df8d-2607-41a1-b1f9-21016895dcd6","Type":"ContainerStarted","Data":"5856eb7265c811de7f2f523fb984dbb979fdfb730a9716daca0516faa1c9d36e"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.003117 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.004818 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk" podStartSLOduration=88.004807016 podStartE2EDuration="1m28.004807016s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.003877544 +0000 UTC m=+107.901552386" watchObservedRunningTime="2025-12-12 16:16:48.004807016 +0000 UTC m=+107.902481848"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.020805 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" event={"ID":"e13eeec0-72dd-418b-9180-87ca0d56870d","Type":"ContainerStarted","Data":"fd9d1e6fffa4e7035ed54facdeb72536d22a2dfeeb29ad14637caee2b9df5255"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.021565 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.024420 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.062619 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rl44g" event={"ID":"9dc06dad-6486-4dd5-9456-40ce964abc7f","Type":"ContainerStarted","Data":"deeb7c532f7e9ff97edae23040bc9476667fa7315fcd0d11d04bb3711459c644"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.065403 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.068602 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.568581823 +0000 UTC m=+108.466256865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.081962 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-4v9cj" podStartSLOduration=88.081928939 podStartE2EDuration="1m28.081928939s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.03281335 +0000 UTC m=+107.930488192" watchObservedRunningTime="2025-12-12 16:16:48.081928939 +0000 UTC m=+107.979603771"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.094867 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" event={"ID":"097ff9f3-52cb-4063-a6a1-0c8178adccc9","Type":"ContainerStarted","Data":"3391bfa86c2024dc10e15e04e7423a676997c029078d391d67d5251710764922"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.126881 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" event={"ID":"dd1275f2-1d38-4b18-acdd-8f4f8e6cedf7","Type":"ContainerStarted","Data":"e8a16c2eedb5ea309065cf1325d9bc35ccbc22920d3bc23727707eb6d6d25770"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.149273 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" podStartSLOduration=89.149253583 podStartE2EDuration="1m29.149253583s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.134886912 +0000 UTC m=+108.032561774" watchObservedRunningTime="2025-12-12 16:16:48.149253583 +0000 UTC m=+108.046928415"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.163573 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj" event={"ID":"2403b973-68b3-4a15-a444-7e271aea91c1","Type":"ContainerStarted","Data":"e82f3c40013aa0c8ba82cfe9c3578dd956565c3c11df9fe0b4ca3011e1952c54"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.167067 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.183952 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.683904039 +0000 UTC m=+108.581578881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.187881 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" event={"ID":"60d98f7f-99e4-4bb4-a7b6-48de2ff6071c","Type":"ContainerStarted","Data":"3079755b4f53a49c201ff53fd159500ca5448f20d5db7147259782bb8c231a63"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.212875 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" event={"ID":"be106c32-9849-49fd-9e4a-4b5b9c16920a","Type":"ContainerStarted","Data":"dd4a9a245634d37fbf31cebb3148a5ba3f3f135b439714cd7cae8524caf2c378"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.226960 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-krgxf" podStartSLOduration=88.22693657 podStartE2EDuration="1m28.22693657s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.214344062 +0000 UTC m=+108.112018904" watchObservedRunningTime="2025-12-12 16:16:48.22693657 +0000 UTC m=+108.124611402"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.227816 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" podStartSLOduration=88.227808591 podStartE2EDuration="1m28.227808591s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.164543266 +0000 UTC m=+108.062218118" watchObservedRunningTime="2025-12-12 16:16:48.227808591 +0000 UTC m=+108.125483423"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.247323 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" podStartSLOduration=88.247299337 podStartE2EDuration="1m28.247299337s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.246122758 +0000 UTC m=+108.143797600" watchObservedRunningTime="2025-12-12 16:16:48.247299337 +0000 UTC m=+108.144974169"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.290796 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.292103 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" event={"ID":"00c7f3b3-f4dd-4d19-9739-512a35f436f5","Type":"ContainerStarted","Data":"92d74a469ee39c0bcb94659e4e5d61a5da4a1013e55137e04404b875311378c7"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.294335 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.295096 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.795076383 +0000 UTC m=+108.692751215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.297682 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj" podStartSLOduration=88.297656926 podStartE2EDuration="1m28.297656926s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.293421723 +0000 UTC m=+108.191096565" watchObservedRunningTime="2025-12-12 16:16:48.297656926 +0000 UTC m=+108.195331758"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.330958 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:48 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:48 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:48 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.331034 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.331281 5130 generic.go:358] "Generic (PLEG): container finished" podID="eb351b5c-811a-4e79-ace2-5d78737aef4c" containerID="e73d603339f20d6b058344e573b3022769f4aef3989544387c21fd5c6d9b69d6" exitCode=0
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.333759 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" event={"ID":"eb351b5c-811a-4e79-ace2-5d78737aef4c","Type":"ContainerDied","Data":"e73d603339f20d6b058344e573b3022769f4aef3989544387c21fd5c6d9b69d6"}
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.341356 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dcs9d" podStartSLOduration=88.341324682 podStartE2EDuration="1m28.341324682s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.339061387 +0000 UTC m=+108.236736219" watchObservedRunningTime="2025-12-12 16:16:48.341324682 +0000 UTC m=+108.238999514"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.375951 5130 scope.go:117] "RemoveContainer" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.392601 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.396213 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:48.896171451 +0000 UTC m=+108.793846283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.433219 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.446539 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" podStartSLOduration=88.44650614 podStartE2EDuration="1m28.44650614s" podCreationTimestamp="2025-12-12 16:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:48.442606965 +0000 UTC m=+108.340281807" watchObservedRunningTime="2025-12-12 16:16:48.44650614 +0000 UTC m=+108.344180972"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.495667 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.509615 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.009600581 +0000 UTC m=+108.907275413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.600634 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.600972 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.100955231 +0000 UTC m=+108.998630063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.703448 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.704067 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.204050058 +0000 UTC m=+109.101724890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.805374 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.805959 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.305932525 +0000 UTC m=+109.203607357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.881926 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lfwgk"
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.908927 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:48 crc kubenswrapper[5130]: E1212 16:16:48.909388 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.409362861 +0000 UTC m=+109.307037693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:48 crc kubenswrapper[5130]: I1212 16:16:48.920605 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-q8kdt"]
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.011442 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.011870 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.511846003 +0000 UTC m=+109.409520835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.022434 5130 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-brfdj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": context deadline exceeded" start-of-body=
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.022544 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": context deadline exceeded"
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.120586 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.121146 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.621124651 +0000 UTC m=+109.518799483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.222135 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.222384 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.722347872 +0000 UTC m=+109.620022704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.222574 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.223092 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.72306559 +0000 UTC m=+109.620740422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.302708 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pvzzz"]
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.311036 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvzzz"
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.317897 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.322582 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvzzz"]
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.325026 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.325568 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.825520061 +0000 UTC m=+109.723194893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.332998 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:49 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:49 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:49 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.333101 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.374893 5130 generic.go:358] "Generic (PLEG): container finished" podID="693e66ed-f826-4819-a47d-f32faf9dab96" containerID="09500d2b077f68f5b99c04eabb78da56eccda243ab25998e3b263f045a781724" exitCode=0
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.375216 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" event={"ID":"693e66ed-f826-4819-a47d-f32faf9dab96","Type":"ContainerDied","Data":"09500d2b077f68f5b99c04eabb78da56eccda243ab25998e3b263f045a781724"}
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.375267 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" event={"ID":"693e66ed-f826-4819-a47d-f32faf9dab96","Type":"ContainerStarted","Data":"13d0b1caba8ff04508e19ef48cc4f441a0defe38da2bb374f183ad06bbb3f8fc"}
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.378908 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rl44g" event={"ID":"9dc06dad-6486-4dd5-9456-40ce964abc7f","Type":"ContainerStarted","Data":"72f37585a0bca9c3319c9d118e4a1e92be5ab220e989e938759c33df2ad65051"}
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.380240 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.384661 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-ndnxt" event={"ID":"097ff9f3-52cb-4063-a6a1-0c8178adccc9","Type":"ContainerStarted","Data":"b0e96f444e1ee35fcf70d22f374be7e87a2b80757684583044cb1a22b1f0b7a2"}
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.401472 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.406685 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fb358025eb77871c75cb9b40f8c7bc36aebb9927910b33781e814fb8ac191a85"}
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.407273 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.412270 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rl44g" podStartSLOduration=10.412163125
podStartE2EDuration="10.412163125s" podCreationTimestamp="2025-12-12 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:49.410513275 +0000 UTC m=+109.308188107" watchObservedRunningTime="2025-12-12 16:16:49.412163125 +0000 UTC m=+109.309837947" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.414047 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mhsj" event={"ID":"2403b973-68b3-4a15-a444-7e271aea91c1","Type":"ContainerStarted","Data":"04148d1229ca2effd5b317f84698290be897429a3303b7bd4170616a3927f3e6"} Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.429000 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-utilities\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.429049 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-catalog-content\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.429159 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmp7\" (UniqueName: \"kubernetes.io/projected/f1a12a40-8493-41e1-84b7-312fc948fca8-kube-api-access-7lmp7\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 
16:16:49.429226 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.429609 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:49.929593561 +0000 UTC m=+109.827268393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.437434 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xks9x" event={"ID":"be106c32-9849-49fd-9e4a-4b5b9c16920a","Type":"ContainerStarted","Data":"490cd66593cd166b4469bda51aed1e8350eb5c7ae43fc65e3b6052f97fe9b94e"} Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.444131 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp" event={"ID":"00c7f3b3-f4dd-4d19-9739-512a35f436f5","Type":"ContainerStarted","Data":"1c7fb1a60726cefca00046de35c8122374885dfc874768d166b59cf01e9606e1"} Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.461085 5130 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" event={"ID":"eb351b5c-811a-4e79-ace2-5d78737aef4c","Type":"ContainerStarted","Data":"fe133c92f4d4789962dbf83bd3c2a2cacb014ba4367f57bbf5c106ffaab48d5f"} Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.461644 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.461604142 podStartE2EDuration="28.461604142s" podCreationTimestamp="2025-12-12 16:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:49.443723346 +0000 UTC m=+109.341398188" watchObservedRunningTime="2025-12-12 16:16:49.461604142 +0000 UTC m=+109.359278974" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.461977 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.470480 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" event={"ID":"e1875478-2fa5-47f4-9c0a-13afc9166e8e","Type":"ContainerStarted","Data":"b72306f835f46e7986e2ff285c6595b12bcadfb52bf2f35b4bd94e06a97d63eb"} Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.471699 5130 patch_prober.go:28] interesting pod/downloads-747b44746d-sm46g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.471785 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-sm46g" podUID="f967d508-b683-4df4-9be0-3a7fb5afa7bb" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.501379 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" podStartSLOduration=90.501365233 podStartE2EDuration="1m30.501365233s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:49.497906189 +0000 UTC m=+109.395581031" watchObservedRunningTime="2025-12-12 16:16:49.501365233 +0000 UTC m=+109.399040065" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.501696 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2gt6h"] Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.510137 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.514433 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.514709 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.518276 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2gt6h"] Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.533711 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.534079 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmp7\" (UniqueName: \"kubernetes.io/projected/f1a12a40-8493-41e1-84b7-312fc948fca8-kube-api-access-7lmp7\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.534148 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-utilities\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.534209 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-catalog-content\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.535321 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-catalog-content\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.535413 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:50.035395384 +0000 UTC m=+109.933070206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.536436 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-2w9hn" podStartSLOduration=90.536410519 podStartE2EDuration="1m30.536410519s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:49.533749924 +0000 UTC m=+109.431424756" watchObservedRunningTime="2025-12-12 16:16:49.536410519 +0000 UTC m=+109.434085351" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.543736 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-utilities\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.621117 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmp7\" (UniqueName: \"kubernetes.io/projected/f1a12a40-8493-41e1-84b7-312fc948fca8-kube-api-access-7lmp7\") pod \"certified-operators-pvzzz\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.638119 5130 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcdz8\" (UniqueName: \"kubernetes.io/projected/3686d912-c8e4-413f-b036-f206a4e826a2-kube-api-access-hcdz8\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.643073 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.643233 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-catalog-content\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.645552 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-utilities\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.645683 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.646278 5130 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.146259761 +0000 UTC m=+110.043934593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.710507 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kxjp8"] Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.747806 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxjp8"] Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.748042 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.748689 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.748951 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:50.248928167 +0000 UTC m=+110.146602999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.749193 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-catalog-content\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.749328 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-utilities\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.749383 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.749521 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hcdz8\" (UniqueName: 
\"kubernetes.io/projected/3686d912-c8e4-413f-b036-f206a4e826a2-kube-api-access-hcdz8\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.750144 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-catalog-content\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.750672 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-utilities\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.750967 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.250958487 +0000 UTC m=+110.148633319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.788631 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcdz8\" (UniqueName: \"kubernetes.io/projected/3686d912-c8e4-413f-b036-f206a4e826a2-kube-api-access-hcdz8\") pod \"community-operators-2gt6h\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.840709 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.850880 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.851114 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-utilities\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.851143 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-catalog-content\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.851190 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8qq\" (UniqueName: \"kubernetes.io/projected/5319f16c-f39a-4bd6-836a-cb336099dbc2-kube-api-access-gj8qq\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.851423 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.351393599 +0000 UTC m=+110.249068431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.899925 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7s65"] Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.910849 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.912049 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7s65"] Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.953111 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8qq\" (UniqueName: \"kubernetes.io/projected/5319f16c-f39a-4bd6-836a-cb336099dbc2-kube-api-access-gj8qq\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.953250 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.953324 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-utilities\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.953346 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-catalog-content\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.953730 5130 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-catalog-content\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: E1212 16:16:49.954243 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.4542279 +0000 UTC m=+110.351902732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.954359 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-utilities\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:49 crc kubenswrapper[5130]: I1212 16:16:49.982737 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8qq\" (UniqueName: \"kubernetes.io/projected/5319f16c-f39a-4bd6-836a-cb336099dbc2-kube-api-access-gj8qq\") pod \"certified-operators-kxjp8\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") " pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.054203 5130 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.054407 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.554375075 +0000 UTC m=+110.452049897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.054693 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-catalog-content\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.054898 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " 
pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.054961 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-utilities\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.054984 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxpj\" (UniqueName: \"kubernetes.io/projected/5957e518-15e6-4acf-9e45-4985b7713fc8-kube-api-access-kbxpj\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.055279 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.555267626 +0000 UTC m=+110.452942458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.097585 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kxjp8" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.116528 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvzzz"] Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.155848 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.156098 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-utilities\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.156122 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxpj\" (UniqueName: \"kubernetes.io/projected/5957e518-15e6-4acf-9e45-4985b7713fc8-kube-api-access-kbxpj\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.156199 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-catalog-content\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.156250 5130 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.656222411 +0000 UTC m=+110.553897243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.156969 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-catalog-content\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.157438 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-utilities\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: W1212 16:16:50.157865 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1a12a40_8493_41e1_84b7_312fc948fca8.slice/crio-70771a8a130e6322df73890d22e5b58e9c784d9164e5ed9740d937291a171571 WatchSource:0}: Error finding container 70771a8a130e6322df73890d22e5b58e9c784d9164e5ed9740d937291a171571: Status 404 returned error can't find the container with id 
70771a8a130e6322df73890d22e5b58e9c784d9164e5ed9740d937291a171571 Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.184440 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxpj\" (UniqueName: \"kubernetes.io/projected/5957e518-15e6-4acf-9e45-4985b7713fc8-kube-api-access-kbxpj\") pod \"community-operators-p7s65\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") " pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.248877 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7s65" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.258570 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.258875 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:50.758863017 +0000 UTC m=+110.656537849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.344839 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:16:50 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld Dec 12 16:16:50 crc kubenswrapper[5130]: [+]process-running ok Dec 12 16:16:50 crc kubenswrapper[5130]: healthz check failed Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.344938 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.367832 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.368497 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:50.868452973 +0000 UTC m=+110.766127795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.467070 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2gt6h"] Dec 12 16:16:50 crc kubenswrapper[5130]: W1212 16:16:50.469258 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3686d912_c8e4_413f_b036_f206a4e826a2.slice/crio-5cc1da989e963af873e82696b122995145445095ec336e5b958ae3ddef9bfffd WatchSource:0}: Error finding container 5cc1da989e963af873e82696b122995145445095ec336e5b958ae3ddef9bfffd: Status 404 returned error can't find the container with id 5cc1da989e963af873e82696b122995145445095ec336e5b958ae3ddef9bfffd Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.470221 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.470648 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:50.970626197 +0000 UTC m=+110.868301029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.552580 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxjp8"] Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.559614 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" event={"ID":"693e66ed-f826-4819-a47d-f32faf9dab96","Type":"ContainerStarted","Data":"0c62422d26a12ade9affd98e441083ff184d6f0243966821d10b78e30a462a95"} Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.585774 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvzzz" event={"ID":"f1a12a40-8493-41e1-84b7-312fc948fca8","Type":"ContainerStarted","Data":"b7222411b3f5b2c07c23cec910ae8077781b1ba52eee3ba591530d28314e3557"} Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.585864 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvzzz" event={"ID":"f1a12a40-8493-41e1-84b7-312fc948fca8","Type":"ContainerStarted","Data":"70771a8a130e6322df73890d22e5b58e9c784d9164e5ed9740d937291a171571"} Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.595310 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gt6h" 
event={"ID":"3686d912-c8e4-413f-b036-f206a4e826a2","Type":"ContainerStarted","Data":"5cc1da989e963af873e82696b122995145445095ec336e5b958ae3ddef9bfffd"} Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.596056 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" gracePeriod=30 Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.597128 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.598769 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.098652063 +0000 UTC m=+110.996326895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.603993 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.611889 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.111871146 +0000 UTC m=+111.009545978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.643298 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" podStartSLOduration=91.643258702 podStartE2EDuration="1m31.643258702s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:50.629835434 +0000 UTC m=+110.527510266" watchObservedRunningTime="2025-12-12 16:16:50.643258702 +0000 UTC m=+110.540933534" Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.713374 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.713579 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.213502867 +0000 UTC m=+111.111177699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.714040 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.714580 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.214553862 +0000 UTC m=+111.112228694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.823117 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7s65"] Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.823343 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.823563 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.323535093 +0000 UTC m=+111.221209925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.823669 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.824157 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.324150698 +0000 UTC m=+111.221825530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: W1212 16:16:50.828431 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5957e518_15e6_4acf_9e45_4985b7713fc8.slice/crio-36bd50d659f1abd49597b7cae2eaed8aebe612ec36c3f9fbc5758f96ffbde8ed WatchSource:0}: Error finding container 36bd50d659f1abd49597b7cae2eaed8aebe612ec36c3f9fbc5758f96ffbde8ed: Status 404 returned error can't find the container with id 36bd50d659f1abd49597b7cae2eaed8aebe612ec36c3f9fbc5758f96ffbde8ed Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.924728 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.925344 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.425296668 +0000 UTC m=+111.322971500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:50 crc kubenswrapper[5130]: I1212 16:16:50.926039 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:50 crc kubenswrapper[5130]: E1212 16:16:50.926508 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.426488377 +0000 UTC m=+111.324163209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.027759 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.028121 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.528069087 +0000 UTC m=+111.425743929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.028367 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.028800 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.528779584 +0000 UTC m=+111.426454596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.130075 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.130412 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.630358454 +0000 UTC m=+111.528033286 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.131093 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.131689 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.631663086 +0000 UTC m=+111.529338118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.232256 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.232541 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.732476157 +0000 UTC m=+111.630150989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.232930 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.233292 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.733277537 +0000 UTC m=+111.630952369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.326909 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:16:51 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld Dec 12 16:16:51 crc kubenswrapper[5130]: [+]process-running ok Dec 12 16:16:51 crc kubenswrapper[5130]: healthz check failed Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.327058 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.334896 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.335553 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-12 16:16:51.835500933 +0000 UTC m=+111.733175765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.437549 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.437946 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:51.937931433 +0000 UTC m=+111.835606265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.492172 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s7x92"] Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.502609 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.506808 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.507383 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7x92"] Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.538415 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.538863 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-catalog-content\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " 
pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.538918 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw9w8\" (UniqueName: \"kubernetes.io/projected/1aaf652b-1019-4193-839d-875d12cc1e27-kube-api-access-xw9w8\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.538944 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-utilities\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.539432 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.039412411 +0000 UTC m=+111.937087333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.620557 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7s65" event={"ID":"5957e518-15e6-4acf-9e45-4985b7713fc8","Type":"ContainerStarted","Data":"4f5f7fa1a8db052822e01db0820c2072f4c3ff8177b85e7fd8eb4cac99d50eb3"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.620642 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7s65" event={"ID":"5957e518-15e6-4acf-9e45-4985b7713fc8","Type":"ContainerStarted","Data":"36bd50d659f1abd49597b7cae2eaed8aebe612ec36c3f9fbc5758f96ffbde8ed"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.632462 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxjp8" event={"ID":"5319f16c-f39a-4bd6-836a-cb336099dbc2","Type":"ContainerStarted","Data":"065f8ce6c69b7680313e715beb5f43833d2c2ad2a400593e9de9f40d21f7bf39"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.632552 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxjp8" event={"ID":"5319f16c-f39a-4bd6-836a-cb336099dbc2","Type":"ContainerStarted","Data":"ff8c45863778a48a425a28a9a87918b0efc06a9a71abddaf0a58cf0518f7b451"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.641034 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-catalog-content\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.641100 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xw9w8\" (UniqueName: \"kubernetes.io/projected/1aaf652b-1019-4193-839d-875d12cc1e27-kube-api-access-xw9w8\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.641124 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-utilities\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.641778 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.646951 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-utilities\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.647480 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-catalog-content\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.647568 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.147550541 +0000 UTC m=+112.045225373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.647972 5130 generic.go:358] "Generic (PLEG): container finished" podID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerID="b7222411b3f5b2c07c23cec910ae8077781b1ba52eee3ba591530d28314e3557" exitCode=0 Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.648757 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvzzz" event={"ID":"f1a12a40-8493-41e1-84b7-312fc948fca8","Type":"ContainerDied","Data":"b7222411b3f5b2c07c23cec910ae8077781b1ba52eee3ba591530d28314e3557"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.703891 5130 generic.go:358] "Generic (PLEG): container finished" podID="3686d912-c8e4-413f-b036-f206a4e826a2" containerID="0b4113c7d36d2a230bc4e2acb1da128399bd31376c24477255787da86e629e81" exitCode=0 Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.704250 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2gt6h" event={"ID":"3686d912-c8e4-413f-b036-f206a4e826a2","Type":"ContainerDied","Data":"0b4113c7d36d2a230bc4e2acb1da128399bd31376c24477255787da86e629e81"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.706458 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw9w8\" (UniqueName: \"kubernetes.io/projected/1aaf652b-1019-4193-839d-875d12cc1e27-kube-api-access-xw9w8\") pod \"redhat-marketplace-s7x92\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.719880 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" event={"ID":"e0adb788-edae-4099-900e-8af998a81f87","Type":"ContainerStarted","Data":"c7cd24f56f61a70b3b1bd508ca18167dc46bd3561d25c915f25345a1e1afbc45"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.727832 5130 generic.go:358] "Generic (PLEG): container finished" podID="19e81fea-065e-43b5-8e56-49bfcfa342f7" containerID="371e9c27a2a1b4863d26a45d93c8501b34d0b3f1e281e503ae42d95a1a9e230b" exitCode=0 Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.728789 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" event={"ID":"19e81fea-065e-43b5-8e56-49bfcfa342f7","Type":"ContainerDied","Data":"371e9c27a2a1b4863d26a45d93c8501b34d0b3f1e281e503ae42d95a1a9e230b"} Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.747063 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.747295 5130 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.247261776 +0000 UTC m=+112.144936618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.748028 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.748709 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.248700161 +0000 UTC m=+112.146374993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.834837 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.850253 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.850478 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.350444935 +0000 UTC m=+112.248119767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.851229 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.852015 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.352005623 +0000 UTC m=+112.249680455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.890901 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mgp9n"] Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.899413 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.913873 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgp9n"] Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.954021 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.954219 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-catalog-content\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.954317 5130 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.45427107 +0000 UTC m=+112.351945902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.954506 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9q2v\" (UniqueName: \"kubernetes.io/projected/86909e43-e62d-4532-8232-aa3ca0de5d28-kube-api-access-r9q2v\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.954743 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:51 crc kubenswrapper[5130]: I1212 16:16:51.954918 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-utilities\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " 
pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:51 crc kubenswrapper[5130]: E1212 16:16:51.955117 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.45509525 +0000 UTC m=+112.352770302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.055918 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.056194 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-utilities\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.056286 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-catalog-content\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " 
pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.056338 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9q2v\" (UniqueName: \"kubernetes.io/projected/86909e43-e62d-4532-8232-aa3ca0de5d28-kube-api-access-r9q2v\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.056473 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.556413623 +0000 UTC m=+112.454088455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.056707 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.057298 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-utilities\") 
pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.057358 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.557350126 +0000 UTC m=+112.455024948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.057592 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-catalog-content\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.097583 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9q2v\" (UniqueName: \"kubernetes.io/projected/86909e43-e62d-4532-8232-aa3ca0de5d28-kube-api-access-r9q2v\") pod \"redhat-marketplace-mgp9n\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") " pod="openshift-marketplace/redhat-marketplace-mgp9n" Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.130943 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7x92"] Dec 12 16:16:52 crc kubenswrapper[5130]: W1212 16:16:52.136357 5130 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1aaf652b_1019_4193_839d_875d12cc1e27.slice/crio-f1da0765a97fe218a374080c0f1f06e2731cd63af36a922455361a4960727e20 WatchSource:0}: Error finding container f1da0765a97fe218a374080c0f1f06e2731cd63af36a922455361a4960727e20: Status 404 returned error can't find the container with id f1da0765a97fe218a374080c0f1f06e2731cd63af36a922455361a4960727e20
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.158720 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.159600 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.659548071 +0000 UTC m=+112.557222903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.231969 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgp9n"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.236006 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.252987 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.261646 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.262329 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.262500 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.262763 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.263247 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.763221082 +0000 UTC m=+112.660895924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.328212 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:52 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:52 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:52 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.328330 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.364257 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.364441 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.864410033 +0000 UTC m=+112.762084865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.364559 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e33370d-b952-4a48-a6cb-73e765546903-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.364814 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e33370d-b952-4a48-a6cb-73e765546903-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.364867 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.365432 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.865421418 +0000 UTC m=+112.763096440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.426997 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.427072 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.431030 5130 patch_prober.go:28] interesting pod/console-64d44f6ddf-zhgm9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.431192 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-zhgm9" podUID="4651322b-9aec-4667-afa3-1602ad5176fe" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.469055 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.469302 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e33370d-b952-4a48-a6cb-73e765546903-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.469422 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e33370d-b952-4a48-a6cb-73e765546903-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.469464 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:52.969421367 +0000 UTC m=+112.867096199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.469531 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e33370d-b952-4a48-a6cb-73e765546903-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.494607 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ndfc"]
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.496815 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e33370d-b952-4a48-a6cb-73e765546903-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.551964 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ndfc"]
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.552040 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgp9n"]
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.552208 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.555105 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.575640 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.576348 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.076323257 +0000 UTC m=+112.973998089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.599337 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.678691 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.678960 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.178928052 +0000 UTC m=+113.076602884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.679450 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-catalog-content\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.679508 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-utilities\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.679716 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knbf\" (UniqueName: \"kubernetes.io/projected/573d2658-6034-4715-a9ad-a7828b324fd5-kube-api-access-5knbf\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.679799 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.680100 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.18009245 +0000 UTC m=+113.077767282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.782552 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.782699 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-utilities\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.782821 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.282774547 +0000 UTC m=+113.180449379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.783194 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5knbf\" (UniqueName: \"kubernetes.io/projected/573d2658-6034-4715-a9ad-a7828b324fd5-kube-api-access-5knbf\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.783347 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.783612 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-catalog-content\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.783747 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-utilities\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.784080 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.284064549 +0000 UTC m=+113.181739381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.784303 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-catalog-content\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.785308 5130 ???:1] "http: TLS handshake error from 192.168.126.11:54766: no serving certificate available for the kubelet"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.793873 5130 generic.go:358] "Generic (PLEG): container finished" podID="1aaf652b-1019-4193-839d-875d12cc1e27" containerID="40faa368c7bb6179b1e51cd173a9e13967aa1bdeffc22c992fde0f7dda5ed0fe" exitCode=0
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.794136 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7x92" event={"ID":"1aaf652b-1019-4193-839d-875d12cc1e27","Type":"ContainerDied","Data":"40faa368c7bb6179b1e51cd173a9e13967aa1bdeffc22c992fde0f7dda5ed0fe"}
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.794230 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7x92" event={"ID":"1aaf652b-1019-4193-839d-875d12cc1e27","Type":"ContainerStarted","Data":"f1da0765a97fe218a374080c0f1f06e2731cd63af36a922455361a4960727e20"}
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.797397 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgp9n" event={"ID":"86909e43-e62d-4532-8232-aa3ca0de5d28","Type":"ContainerStarted","Data":"6f2c7e4ee8005058653be608254682e6f8ccf99963c0cc49075bb88e3c4fee94"}
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.802702 5130 generic.go:358] "Generic (PLEG): container finished" podID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerID="4f5f7fa1a8db052822e01db0820c2072f4c3ff8177b85e7fd8eb4cac99d50eb3" exitCode=0
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.802779 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7s65" event={"ID":"5957e518-15e6-4acf-9e45-4985b7713fc8","Type":"ContainerDied","Data":"4f5f7fa1a8db052822e01db0820c2072f4c3ff8177b85e7fd8eb4cac99d50eb3"}
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.806839 5130 generic.go:358] "Generic (PLEG): container finished" podID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerID="065f8ce6c69b7680313e715beb5f43833d2c2ad2a400593e9de9f40d21f7bf39" exitCode=0
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.806991 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxjp8" event={"ID":"5319f16c-f39a-4bd6-836a-cb336099dbc2","Type":"ContainerDied","Data":"065f8ce6c69b7680313e715beb5f43833d2c2ad2a400593e9de9f40d21f7bf39"}
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.810125 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5knbf\" (UniqueName: \"kubernetes.io/projected/573d2658-6034-4715-a9ad-a7828b324fd5-kube-api-access-5knbf\") pod \"redhat-operators-9ndfc\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.878049 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.892327 5130 patch_prober.go:28] interesting pod/downloads-747b44746d-sm46g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.892420 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-sm46g" podUID="f967d508-b683-4df4-9be0-3a7fb5afa7bb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.893840 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:52 crc kubenswrapper[5130]: E1212 16:16:52.897598 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.397542319 +0000 UTC m=+113.295217151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.914656 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2blsm"]
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.934632 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2blsm"]
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.934868 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:52 crc kubenswrapper[5130]: I1212 16:16:52.997529 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:52.998343 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.498325169 +0000 UTC m=+113.396000001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.072259 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.101670 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.101971 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.601914337 +0000 UTC m=+113.499589159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.102111 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.102200 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-utilities\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.102239 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-catalog-content\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.102308 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gj7n\" (UniqueName: \"kubernetes.io/projected/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-kube-api-access-8gj7n\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.102748 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.602731387 +0000 UTC m=+113.500406219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.187752 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.204050 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.204666 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-catalog-content\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.204724 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gj7n\" (UniqueName: \"kubernetes.io/projected/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-kube-api-access-8gj7n\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.204831 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-utilities\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.205562 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-utilities\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.205790 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-catalog-content\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.205852 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.705836655 +0000 UTC m=+113.603511487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.240510 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gj7n\" (UniqueName: \"kubernetes.io/projected/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-kube-api-access-8gj7n\") pod \"redhat-operators-2blsm\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.263073 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.263854 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19e81fea-065e-43b5-8e56-49bfcfa342f7" containerName="collect-profiles"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.263883 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e81fea-065e-43b5-8e56-49bfcfa342f7" containerName="collect-profiles"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.264042 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="19e81fea-065e-43b5-8e56-49bfcfa342f7" containerName="collect-profiles"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.264106 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.275823 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.280603 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.286359 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.300623 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.305484 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e81fea-065e-43b5-8e56-49bfcfa342f7-config-volume\") pod \"19e81fea-065e-43b5-8e56-49bfcfa342f7\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") "
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.306042 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e81fea-065e-43b5-8e56-49bfcfa342f7-secret-volume\") pod \"19e81fea-065e-43b5-8e56-49bfcfa342f7\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") "
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.306766 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csnbw\" (UniqueName: \"kubernetes.io/projected/19e81fea-065e-43b5-8e56-49bfcfa342f7-kube-api-access-csnbw\") pod \"19e81fea-065e-43b5-8e56-49bfcfa342f7\" (UID: \"19e81fea-065e-43b5-8e56-49bfcfa342f7\") "
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.306893 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e81fea-065e-43b5-8e56-49bfcfa342f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "19e81fea-065e-43b5-8e56-49bfcfa342f7" (UID: "19e81fea-065e-43b5-8e56-49bfcfa342f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.307579 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.307710 5130 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e81fea-065e-43b5-8e56-49bfcfa342f7-config-volume\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.307986 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed.
No retries permitted until 2025-12-12 16:16:53.807970748 +0000 UTC m=+113.705645580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.311073 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e81fea-065e-43b5-8e56-49bfcfa342f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19e81fea-065e-43b5-8e56-49bfcfa342f7" (UID: "19e81fea-065e-43b5-8e56-49bfcfa342f7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.324919 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e81fea-065e-43b5-8e56-49bfcfa342f7-kube-api-access-csnbw" (OuterVolumeSpecName: "kube-api-access-csnbw") pod "19e81fea-065e-43b5-8e56-49bfcfa342f7" (UID: "19e81fea-065e-43b5-8e56-49bfcfa342f7"). InnerVolumeSpecName "kube-api-access-csnbw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.325563 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.338353 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:16:53 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld Dec 12 16:16:53 crc kubenswrapper[5130]: [+]process-running ok Dec 12 16:16:53 crc kubenswrapper[5130]: healthz check failed Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.338876 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.399307 5130 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.409061 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.409674 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2025-12-12 16:16:53.90962709 +0000 UTC m=+113.807301912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.409960 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ad9be1e-b38d-4280-8a67-505c4461c55d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.410019 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ad9be1e-b38d-4280-8a67-505c4461c55d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.410214 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.410445 5130 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/19e81fea-065e-43b5-8e56-49bfcfa342f7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.410457 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-csnbw\" (UniqueName: \"kubernetes.io/projected/19e81fea-065e-43b5-8e56-49bfcfa342f7-kube-api-access-csnbw\") on node \"crc\" DevicePath \"\"" Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.410816 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:53.910808309 +0000 UTC m=+113.808483141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.503121 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ndfc"] Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.503461 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.503543 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.511441 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.512470 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-12 16:16:54.01240692 +0000 UTC m=+113.910081752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.512602 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:53 crc kubenswrapper[5130]: E1212 16:16:53.513051 5130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-12 16:16:54.013042875 +0000 UTC m=+113.910717697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jqtjf" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.513373 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ad9be1e-b38d-4280-8a67-505c4461c55d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.513410 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ad9be1e-b38d-4280-8a67-505c4461c55d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.513593 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.513614 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ad9be1e-b38d-4280-8a67-505c4461c55d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.518794 5130 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-12T16:16:53.399347539Z","UUID":"7051a5e2-2663-4bc7-94ec-75dae1044083","Handler":null,"Name":"","Endpoint":""} Dec 12 16:16:53 crc kubenswrapper[5130]: W1212 16:16:53.533172 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod573d2658_6034_4715_a9ad_a7828b324fd5.slice/crio-ac163b2bf1c1d578b9037f0b59dae7dd262bb9d00e98558c9f328edeb8dabdb0 WatchSource:0}: Error finding container ac163b2bf1c1d578b9037f0b59dae7dd262bb9d00e98558c9f328edeb8dabdb0: Status 404 returned error can't find the container with id ac163b2bf1c1d578b9037f0b59dae7dd262bb9d00e98558c9f328edeb8dabdb0 Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.543520 5130 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.543577 5130 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.544212 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ad9be1e-b38d-4280-8a67-505c4461c55d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.612627 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-49zmj" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.614117 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.625153 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.629204 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.717021 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.722227 5130 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.722273 5130 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.798268 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jqtjf\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.815960 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2blsm"] Dec 12 16:16:53 crc kubenswrapper[5130]: W1212 16:16:53.837779 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb3b2430_d128_4d2d_9518_6be0ca0ddc6f.slice/crio-4e9f04b1e852fa9141933d1eca7d926563f8ad649e9315eff76350ec836adf3d WatchSource:0}: Error finding container 4e9f04b1e852fa9141933d1eca7d926563f8ad649e9315eff76350ec836adf3d: Status 404 returned error can't find the container with id 4e9f04b1e852fa9141933d1eca7d926563f8ad649e9315eff76350ec836adf3d Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.838589 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"6e33370d-b952-4a48-a6cb-73e765546903","Type":"ContainerStarted","Data":"ef76e0aa0ff828ddf012582e32a39ad73fae468c8e2f7f3b7834e520001cf401"} Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.851098 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" event={"ID":"19e81fea-065e-43b5-8e56-49bfcfa342f7","Type":"ContainerDied","Data":"328df9b4f48f0adc7c6483781e32bef2bbf38c7a3bc72162f9752fc54e642716"} Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.851167 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="328df9b4f48f0adc7c6483781e32bef2bbf38c7a3bc72162f9752fc54e642716" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.851520 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425935-7hkrm" Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.856573 5130 generic.go:358] "Generic (PLEG): container finished" podID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerID="bbe21f163134a76fb060a74769ac915b36f75e2f19c37cef8c4ecf4493e03ed2" exitCode=0 Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.856844 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgp9n" event={"ID":"86909e43-e62d-4532-8232-aa3ca0de5d28","Type":"ContainerDied","Data":"bbe21f163134a76fb060a74769ac915b36f75e2f19c37cef8c4ecf4493e03ed2"} Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.862124 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ndfc" event={"ID":"573d2658-6034-4715-a9ad-a7828b324fd5","Type":"ContainerStarted","Data":"ac163b2bf1c1d578b9037f0b59dae7dd262bb9d00e98558c9f328edeb8dabdb0"} Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.882377 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" 
event={"ID":"e0adb788-edae-4099-900e-8af998a81f87","Type":"ContainerStarted","Data":"778d9b28dc6e04609397d2b05822d1260657b723614e68ee470aa7df2b60e667"} Dec 12 16:16:53 crc kubenswrapper[5130]: I1212 16:16:53.890672 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-sg8rq" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.027117 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.332487 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 12 16:16:54 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld Dec 12 16:16:54 crc kubenswrapper[5130]: [+]process-running ok Dec 12 16:16:54 crc kubenswrapper[5130]: healthz check failed Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.332566 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.339038 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.339889 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.340027 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.345033 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.360598 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.366055 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.366696 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.378820 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.383685 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.407958 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 12 16:16:54 crc kubenswrapper[5130]: W1212 16:16:54.448428 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0ad9be1e_b38d_4280_8a67_505c4461c55d.slice/crio-24759bb2246b4ec47d790729a7b754f9ac9ba3507bc3ec20b520d87ac9c1c2f7 WatchSource:0}: Error finding container 24759bb2246b4ec47d790729a7b754f9ac9ba3507bc3ec20b520d87ac9c1c2f7: Status 404 returned error can't find the container with id 24759bb2246b4ec47d790729a7b754f9ac9ba3507bc3ec20b520d87ac9c1c2f7 Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.453306 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.462424 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e8bbb2d-9d91-4541-a2d2-891ab81dd883-metrics-certs\") pod \"network-metrics-daemon-jhhcn\" (UID: \"4e8bbb2d-9d91-4541-a2d2-891ab81dd883\") " pod="openshift-multus/network-metrics-daemon-jhhcn" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.575341 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.587391 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.600371 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.609683 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jhhcn"
Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.636953 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jqtjf"]
Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.932214 5130 generic.go:358] "Generic (PLEG): container finished" podID="573d2658-6034-4715-a9ad-a7828b324fd5" containerID="a44d2a4eeeeb09f66e7765e59ee141b97a02eccc0257c3866e57084f4a9d1b9b" exitCode=0
Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.932312 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ndfc" event={"ID":"573d2658-6034-4715-a9ad-a7828b324fd5","Type":"ContainerDied","Data":"a44d2a4eeeeb09f66e7765e59ee141b97a02eccc0257c3866e57084f4a9d1b9b"}
Dec 12 16:16:54 crc kubenswrapper[5130]: I1212 16:16:54.944489 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" event={"ID":"162da780-4bd3-4acf-b114-06ae104fc8ad","Type":"ContainerStarted","Data":"4d802f5dbe85c769c5b4afa6aaa710f145332a5713a213a44b0344adeeb96222"}
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.055059 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" event={"ID":"e0adb788-edae-4099-900e-8af998a81f87","Type":"ContainerStarted","Data":"29978154106da8f34547e10387d3310c19984811357148903cfea95660893352"}
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.115111 5130 generic.go:358] "Generic (PLEG): container finished" podID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerID="3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147" exitCode=0
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.115858 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blsm" event={"ID":"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f","Type":"ContainerDied","Data":"3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147"}
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.116015 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blsm" event={"ID":"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f","Type":"ContainerStarted","Data":"4e9f04b1e852fa9141933d1eca7d926563f8ad649e9315eff76350ec836adf3d"}
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.136624 5130 generic.go:358] "Generic (PLEG): container finished" podID="6e33370d-b952-4a48-a6cb-73e765546903" containerID="38a2cc42fbb3ea06b304998ee684662f49cbb70135cad55e340386b71e715ff0" exitCode=0
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.136841 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6e33370d-b952-4a48-a6cb-73e765546903","Type":"ContainerDied","Data":"38a2cc42fbb3ea06b304998ee684662f49cbb70135cad55e340386b71e715ff0"}
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.144431 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0ad9be1e-b38d-4280-8a67-505c4461c55d","Type":"ContainerStarted","Data":"24759bb2246b4ec47d790729a7b754f9ac9ba3507bc3ec20b520d87ac9c1c2f7"}
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.333475 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:55 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:55 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:55 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.333901 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:55 crc kubenswrapper[5130]: W1212 16:16:55.403321 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-f1d3e6758829eb05a7b38e13aecaad83049b28eb2bb658e628e89dbc8458f4c8 WatchSource:0}: Error finding container f1d3e6758829eb05a7b38e13aecaad83049b28eb2bb658e628e89dbc8458f4c8: Status 404 returned error can't find the container with id f1d3e6758829eb05a7b38e13aecaad83049b28eb2bb658e628e89dbc8458f4c8
Dec 12 16:16:55 crc kubenswrapper[5130]: I1212 16:16:55.560681 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jhhcn"]
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.163366 5130 generic.go:358] "Generic (PLEG): container finished" podID="0ad9be1e-b38d-4280-8a67-505c4461c55d" containerID="1455605e00220700f9e5f0933201b7232dbb3752850efe04ff10c3c0f2c25ddf" exitCode=0
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.163565 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0ad9be1e-b38d-4280-8a67-505c4461c55d","Type":"ContainerDied","Data":"1455605e00220700f9e5f0933201b7232dbb3752850efe04ff10c3c0f2c25ddf"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.183512 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jhhcn" event={"ID":"4e8bbb2d-9d91-4541-a2d2-891ab81dd883","Type":"ContainerStarted","Data":"54bc15a964e93b2b97abe7832b8620ad0b73ee6a55fc6aac574f17bf4ef514c3"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.194481 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"a2ad0509ac0e11ff6f34df0ce90ec909e6e2ac90653e628dbf4e975ea9c7e816"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.194537 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"acb628d8928528762ad899b9ca2ae3961510926ca5dadc7a016a7f22008b5399"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.212955 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"9ae41a1cf88f12ab3ae878a8fe7c01e188e9bf5cc91283216ee83d87f3a17afe"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.213007 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"f1d3e6758829eb05a7b38e13aecaad83049b28eb2bb658e628e89dbc8458f4c8"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.215079 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.216585 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" event={"ID":"162da780-4bd3-4acf-b114-06ae104fc8ad","Type":"ContainerStarted","Data":"a39c80875bc5a6660406644e4cb5ad2ca4830e3788cd5f6a1d14fba813a1e0fc"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.216766 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.260215 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" event={"ID":"e0adb788-edae-4099-900e-8af998a81f87","Type":"ContainerStarted","Data":"b637228e00ecc8e64a50469e6d75ed252632d7cda7826351f73586b7d35acbd7"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.296109 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"a42909a1aedd118e8ae6266062cc51816f5cad149d72c881db45dba86bd66f47"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.296619 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"46d88d467caf27dc4b86e7ddcbe6d4e9acb4ad8dee93e430fe19e83b37470960"}
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.311906 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" podStartSLOduration=97.311869247 podStartE2EDuration="1m37.311869247s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:56.305262386 +0000 UTC m=+116.202937218" watchObservedRunningTime="2025-12-12 16:16:56.311869247 +0000 UTC m=+116.209544079"
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.341140 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:56 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:56 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:56 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.341329 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.353702 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-59hhc" podStartSLOduration=17.353660087 podStartE2EDuration="17.353660087s" podCreationTimestamp="2025-12-12 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:56.349117506 +0000 UTC m=+116.246792338" watchObservedRunningTime="2025-12-12 16:16:56.353660087 +0000 UTC m=+116.251334919"
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.689900 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.848435 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e33370d-b952-4a48-a6cb-73e765546903-kube-api-access\") pod \"6e33370d-b952-4a48-a6cb-73e765546903\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") "
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.848564 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e33370d-b952-4a48-a6cb-73e765546903-kubelet-dir\") pod \"6e33370d-b952-4a48-a6cb-73e765546903\" (UID: \"6e33370d-b952-4a48-a6cb-73e765546903\") "
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.849014 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e33370d-b952-4a48-a6cb-73e765546903-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6e33370d-b952-4a48-a6cb-73e765546903" (UID: "6e33370d-b952-4a48-a6cb-73e765546903"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.891703 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e33370d-b952-4a48-a6cb-73e765546903-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6e33370d-b952-4a48-a6cb-73e765546903" (UID: "6e33370d-b952-4a48-a6cb-73e765546903"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.951172 5130 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e33370d-b952-4a48-a6cb-73e765546903-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:56 crc kubenswrapper[5130]: I1212 16:16:56.951253 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e33370d-b952-4a48-a6cb-73e765546903-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.310930 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6e33370d-b952-4a48-a6cb-73e765546903","Type":"ContainerDied","Data":"ef76e0aa0ff828ddf012582e32a39ad73fae468c8e2f7f3b7834e520001cf401"}
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.310977 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef76e0aa0ff828ddf012582e32a39ad73fae468c8e2f7f3b7834e520001cf401"
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.311057 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.318402 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jhhcn" event={"ID":"4e8bbb2d-9d91-4541-a2d2-891ab81dd883","Type":"ContainerStarted","Data":"6990beb3d18c95a15a0b55ee50d3dc2b5e88ee79b1b3e5f328bd2b6d8aef1328"}
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.331391 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:57 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:57 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:57 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.331545 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.608404 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.662378 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ad9be1e-b38d-4280-8a67-505c4461c55d-kubelet-dir\") pod \"0ad9be1e-b38d-4280-8a67-505c4461c55d\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") "
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.662547 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9be1e-b38d-4280-8a67-505c4461c55d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0ad9be1e-b38d-4280-8a67-505c4461c55d" (UID: "0ad9be1e-b38d-4280-8a67-505c4461c55d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.662802 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ad9be1e-b38d-4280-8a67-505c4461c55d-kube-api-access\") pod \"0ad9be1e-b38d-4280-8a67-505c4461c55d\" (UID: \"0ad9be1e-b38d-4280-8a67-505c4461c55d\") "
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.663546 5130 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ad9be1e-b38d-4280-8a67-505c4461c55d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.675649 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad9be1e-b38d-4280-8a67-505c4461c55d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0ad9be1e-b38d-4280-8a67-505c4461c55d" (UID: "0ad9be1e-b38d-4280-8a67-505c4461c55d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.740851 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rl44g"
Dec 12 16:16:57 crc kubenswrapper[5130]: I1212 16:16:57.768217 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ad9be1e-b38d-4280-8a67-505c4461c55d-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 16:16:57 crc kubenswrapper[5130]: E1212 16:16:57.798911 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:16:57 crc kubenswrapper[5130]: E1212 16:16:57.803649 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:16:57 crc kubenswrapper[5130]: E1212 16:16:57.806337 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:16:57 crc kubenswrapper[5130]: E1212 16:16:57.806421 5130 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 16:16:58 crc kubenswrapper[5130]: I1212 16:16:58.325672 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:58 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:58 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:58 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:58 crc kubenswrapper[5130]: I1212 16:16:58.325762 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:58 crc kubenswrapper[5130]: I1212 16:16:58.333198 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"0ad9be1e-b38d-4280-8a67-505c4461c55d","Type":"ContainerDied","Data":"24759bb2246b4ec47d790729a7b754f9ac9ba3507bc3ec20b520d87ac9c1c2f7"}
Dec 12 16:16:58 crc kubenswrapper[5130]: I1212 16:16:58.333249 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24759bb2246b4ec47d790729a7b754f9ac9ba3507bc3ec20b520d87ac9c1c2f7"
Dec 12 16:16:58 crc kubenswrapper[5130]: I1212 16:16:58.333337 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 12 16:16:59 crc kubenswrapper[5130]: I1212 16:16:59.329581 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:16:59 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:16:59 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:16:59 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:16:59 crc kubenswrapper[5130]: I1212 16:16:59.330119 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:16:59 crc kubenswrapper[5130]: I1212 16:16:59.365672 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jhhcn" event={"ID":"4e8bbb2d-9d91-4541-a2d2-891ab81dd883","Type":"ContainerStarted","Data":"b10e3a20bad4f3505a710b888368a440ba49ee59b92e4b641ca6ae2f9b1005eb"}
Dec 12 16:16:59 crc kubenswrapper[5130]: I1212 16:16:59.389143 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jhhcn" podStartSLOduration=100.389105955 podStartE2EDuration="1m40.389105955s" podCreationTimestamp="2025-12-12 16:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:16:59.383692823 +0000 UTC m=+119.281367655" watchObservedRunningTime="2025-12-12 16:16:59.389105955 +0000 UTC m=+119.286780787"
Dec 12 16:16:59 crc kubenswrapper[5130]: I1212 16:16:59.494543 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-sm46g"
Dec 12 16:17:00 crc kubenswrapper[5130]: I1212 16:17:00.329800 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:00 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:17:00 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:17:00 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:17:00 crc kubenswrapper[5130]: I1212 16:17:00.330225 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:00 crc kubenswrapper[5130]: I1212 16:17:00.613723 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:17:01 crc kubenswrapper[5130]: I1212 16:17:01.328006 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:01 crc kubenswrapper[5130]: [-]has-synced failed: reason withheld
Dec 12 16:17:01 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:17:01 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:17:01 crc kubenswrapper[5130]: I1212 16:17:01.328550 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:02 crc kubenswrapper[5130]: I1212 16:17:02.326739 5130 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bqttx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 12 16:17:02 crc kubenswrapper[5130]: [+]has-synced ok
Dec 12 16:17:02 crc kubenswrapper[5130]: [+]process-running ok
Dec 12 16:17:02 crc kubenswrapper[5130]: healthz check failed
Dec 12 16:17:02 crc kubenswrapper[5130]: I1212 16:17:02.326836 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bqttx" podUID="1a9ac0b2-cad1-44fa-993c-0ae63193f086" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 12 16:17:02 crc kubenswrapper[5130]: I1212 16:17:02.427904 5130 patch_prober.go:28] interesting pod/console-64d44f6ddf-zhgm9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 12 16:17:02 crc kubenswrapper[5130]: I1212 16:17:02.428095 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-zhgm9" podUID="4651322b-9aec-4667-afa3-1602ad5176fe" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 12 16:17:02 crc kubenswrapper[5130]: I1212 16:17:02.870488 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g"
Dec 12 16:17:03 crc kubenswrapper[5130]: I1212 16:17:03.066788 5130 ???:1] "http: TLS handshake error from 192.168.126.11:41968: no serving certificate available for the kubelet"
Dec 12 16:17:03 crc kubenswrapper[5130]: I1212 16:17:03.328709 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-bqttx"
Dec 12 16:17:03 crc kubenswrapper[5130]: I1212 16:17:03.367684 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-bqttx"
Dec 12 16:17:06 crc kubenswrapper[5130]: I1212 16:17:06.431968 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-flnsl"]
Dec 12 16:17:06 crc kubenswrapper[5130]: I1212 16:17:06.434051 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" podUID="d259a06e-3949-41b6-a067-7c01441da4b1" containerName="controller-manager" containerID="cri-o://6a61ed43182c84d5f5ba853b183f677998fddb6810cc65d32ca11633c12c5ced" gracePeriod=30
Dec 12 16:17:06 crc kubenswrapper[5130]: I1212 16:17:06.451947 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"]
Dec 12 16:17:06 crc kubenswrapper[5130]: I1212 16:17:06.452461 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerName="route-controller-manager" containerID="cri-o://5afed13e7cab1d026459fabea793580ca962d81aa42c1db7a9cb82b49da4a6ad" gracePeriod=30
Dec 12 16:17:07 crc kubenswrapper[5130]: I1212 16:17:07.678318 5130 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-flnsl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 12 16:17:07 crc kubenswrapper[5130]: I1212 16:17:07.678501 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" podUID="d259a06e-3949-41b6-a067-7c01441da4b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 12 16:17:07 crc kubenswrapper[5130]: E1212 16:17:07.784649 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:07 crc kubenswrapper[5130]: E1212 16:17:07.786697 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:07 crc kubenswrapper[5130]: E1212 16:17:07.788386 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:07 crc kubenswrapper[5130]: E1212 16:17:07.788518 5130 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 16:17:11 crc kubenswrapper[5130]: I1212 16:17:11.461509 5130 generic.go:358] "Generic (PLEG): container finished" podID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerID="5afed13e7cab1d026459fabea793580ca962d81aa42c1db7a9cb82b49da4a6ad" exitCode=0
Dec 12 16:17:11 crc kubenswrapper[5130]: I1212 16:17:11.461619 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" event={"ID":"a78c6a97-054e-484e-aae2-a33bd3bb7b40","Type":"ContainerDied","Data":"5afed13e7cab1d026459fabea793580ca962d81aa42c1db7a9cb82b49da4a6ad"}
Dec 12 16:17:11 crc kubenswrapper[5130]: I1212 16:17:11.463870 5130 generic.go:358] "Generic (PLEG): container finished" podID="d259a06e-3949-41b6-a067-7c01441da4b1" containerID="6a61ed43182c84d5f5ba853b183f677998fddb6810cc65d32ca11633c12c5ced" exitCode=0
Dec 12 16:17:11 crc kubenswrapper[5130]: I1212 16:17:11.463942 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" event={"ID":"d259a06e-3949-41b6-a067-7c01441da4b1","Type":"ContainerDied","Data":"6a61ed43182c84d5f5ba853b183f677998fddb6810cc65d32ca11633c12c5ced"}
Dec 12 16:17:13 crc kubenswrapper[5130]: I1212 16:17:13.373069 5130 patch_prober.go:28] interesting pod/authentication-operator-7f5c659b84-6t92c container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 12 16:17:13 crc kubenswrapper[5130]: I1212 16:17:13.374153 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6t92c" podUID="d55f43e2-46df-4460-b17f-0daa75b89154" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 12 16:17:14 crc kubenswrapper[5130]: I1212 16:17:14.129999 5130 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-zksq4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 12 16:17:14 crc kubenswrapper[5130]: I1212 16:17:14.130131 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 12 16:17:15 crc kubenswrapper[5130]: I1212 16:17:15.398755 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:17:15 crc kubenswrapper[5130]: I1212 16:17:15.405110 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-zhgm9"
Dec 12 16:17:17 crc kubenswrapper[5130]: I1212 16:17:17.327172 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:17:17 crc kubenswrapper[5130]: I1212 16:17:17.678158 5130 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-flnsl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 12 16:17:17 crc kubenswrapper[5130]: I1212 16:17:17.678349 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" podUID="d259a06e-3949-41b6-a067-7c01441da4b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 12 16:17:17 crc kubenswrapper[5130]: E1212 16:17:17.785220 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:17 crc kubenswrapper[5130]: E1212 16:17:17.786995 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:17 crc kubenswrapper[5130]: E1212 16:17:17.788751 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 12 16:17:17 crc kubenswrapper[5130]: E1212 16:17:17.788801 5130 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 12 16:17:20 crc kubenswrapper[5130]: I1212 16:17:20.613961 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mjzlp"
Dec 12 16:17:23 crc kubenswrapper[5130]: I1212 16:17:23.539766 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-q8kdt_d943d968-b5e5-4d94-8fc7-8ba0013e5d76/kube-multus-additional-cni-plugins/0.log"
Dec 12 16:17:23 crc kubenswrapper[5130]: I1212 16:17:23.540141 5130 generic.go:358] "Generic (PLEG): container finished" podID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" exitCode=137
Dec 12 16:17:23 crc kubenswrapper[5130]: I1212 16:17:23.540274 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" event={"ID":"d943d968-b5e5-4d94-8fc7-8ba0013e5d76","Type":"ContainerDied","Data":"a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45"}
Dec 12 16:17:23 crc kubenswrapper[5130]: I1212 16:17:23.581657 5130 ???:1] "http: TLS handshake error from 192.168.126.11:44232: no serving certificate available for the kubelet"
Dec 12 16:17:24 crc kubenswrapper[5130]: I1212 16:17:24.130493 5130 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-zksq4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 12 16:17:24 crc kubenswrapper[5130]: I1212 16:17:24.130598 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.860692 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.863286 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ad9be1e-b38d-4280-8a67-505c4461c55d" containerName="pruner"
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.863318 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9be1e-b38d-4280-8a67-505c4461c55d" containerName="pruner"
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.863353 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e33370d-b952-4a48-a6cb-73e765546903" containerName="pruner"
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.863360 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e33370d-b952-4a48-a6cb-73e765546903" containerName="pruner"
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.863492 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e33370d-b952-4a48-a6cb-73e765546903" containerName="pruner"
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.863508 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ad9be1e-b38d-4280-8a67-505c4461c55d" containerName="pruner"
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.878036 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.878285 5130 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.882660 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.882998 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.963309 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24732491-f54a-410e-a29e-c8fb26fd9cde-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"24732491-f54a-410e-a29e-c8fb26fd9cde\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:26 crc kubenswrapper[5130]: I1212 16:17:26.963441 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24732491-f54a-410e-a29e-c8fb26fd9cde-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"24732491-f54a-410e-a29e-c8fb26fd9cde\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.065048 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24732491-f54a-410e-a29e-c8fb26fd9cde-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"24732491-f54a-410e-a29e-c8fb26fd9cde\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.065159 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24732491-f54a-410e-a29e-c8fb26fd9cde-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: 
\"24732491-f54a-410e-a29e-c8fb26fd9cde\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.065284 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24732491-f54a-410e-a29e-c8fb26fd9cde-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"24732491-f54a-410e-a29e-c8fb26fd9cde\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.092554 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24732491-f54a-410e-a29e-c8fb26fd9cde-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"24732491-f54a-410e-a29e-c8fb26fd9cde\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.213426 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.322594 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.679347 5130 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-flnsl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 12 16:17:27 crc kubenswrapper[5130]: I1212 16:17:27.679448 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" podUID="d259a06e-3949-41b6-a067-7c01441da4b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: 
connection refused" Dec 12 16:17:27 crc kubenswrapper[5130]: E1212 16:17:27.781802 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45 is running failed: container process not found" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 16:17:27 crc kubenswrapper[5130]: E1212 16:17:27.782237 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45 is running failed: container process not found" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 16:17:27 crc kubenswrapper[5130]: E1212 16:17:27.782856 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45 is running failed: container process not found" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 12 16:17:27 crc kubenswrapper[5130]: E1212 16:17:27.782903 5130 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.766030 5130 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-q8kdt_d943d968-b5e5-4d94-8fc7-8ba0013e5d76/kube-multus-additional-cni-plugins/0.log" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.766961 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.841798 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.873774 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-tuning-conf-dir\") pod \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.873830 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7hhb\" (UniqueName: \"kubernetes.io/projected/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-kube-api-access-x7hhb\") pod \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.873862 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-cni-sysctl-allowlist\") pod \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.873920 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-ready\") pod \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\" (UID: \"d943d968-b5e5-4d94-8fc7-8ba0013e5d76\") " Dec 12 
16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.880573 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "d943d968-b5e5-4d94-8fc7-8ba0013e5d76" (UID: "d943d968-b5e5-4d94-8fc7-8ba0013e5d76"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.881406 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "d943d968-b5e5-4d94-8fc7-8ba0013e5d76" (UID: "d943d968-b5e5-4d94-8fc7-8ba0013e5d76"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.881434 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-ready" (OuterVolumeSpecName: "ready") pod "d943d968-b5e5-4d94-8fc7-8ba0013e5d76" (UID: "d943d968-b5e5-4d94-8fc7-8ba0013e5d76"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.883943 5130 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.883989 5130 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.884006 5130 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-ready\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.886024 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-kube-api-access-x7hhb" (OuterVolumeSpecName: "kube-api-access-x7hhb") pod "d943d968-b5e5-4d94-8fc7-8ba0013e5d76" (UID: "d943d968-b5e5-4d94-8fc7-8ba0013e5d76"). InnerVolumeSpecName "kube-api-access-x7hhb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.895302 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69f958c846-qd8rg"] Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.896262 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.896283 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.896293 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d259a06e-3949-41b6-a067-7c01441da4b1" containerName="controller-manager" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.896301 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="d259a06e-3949-41b6-a067-7c01441da4b1" containerName="controller-manager" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.896412 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" containerName="kube-multus-additional-cni-plugins" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.896429 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="d259a06e-3949-41b6-a067-7c01441da4b1" containerName="controller-manager" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.897683 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.912718 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.916488 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f958c846-qd8rg"] Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.982018 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b"] Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.983093 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerName="route-controller-manager" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.983123 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerName="route-controller-manager" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.983275 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" containerName="route-controller-manager" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.985854 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-client-ca\") pod \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.985951 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv2mw\" (UniqueName: \"kubernetes.io/projected/d259a06e-3949-41b6-a067-7c01441da4b1-kube-api-access-wv2mw\") pod \"d259a06e-3949-41b6-a067-7c01441da4b1\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.985980 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d259a06e-3949-41b6-a067-7c01441da4b1-serving-cert\") pod \"d259a06e-3949-41b6-a067-7c01441da4b1\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986078 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78c6a97-054e-484e-aae2-a33bd3bb7b40-serving-cert\") pod \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986110 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d259a06e-3949-41b6-a067-7c01441da4b1-tmp\") pod \"d259a06e-3949-41b6-a067-7c01441da4b1\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986145 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfhxj\" (UniqueName: \"kubernetes.io/projected/a78c6a97-054e-484e-aae2-a33bd3bb7b40-kube-api-access-vfhxj\") pod \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986172 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-client-ca\") pod \"d259a06e-3949-41b6-a067-7c01441da4b1\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986244 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-proxy-ca-bundles\") pod \"d259a06e-3949-41b6-a067-7c01441da4b1\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986463 
5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-config\") pod \"d259a06e-3949-41b6-a067-7c01441da4b1\" (UID: \"d259a06e-3949-41b6-a067-7c01441da4b1\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986509 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a78c6a97-054e-484e-aae2-a33bd3bb7b40-tmp\") pod \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986552 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-config\") pod \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\" (UID: \"a78c6a97-054e-484e-aae2-a33bd3bb7b40\") " Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.986809 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x7hhb\" (UniqueName: \"kubernetes.io/projected/d943d968-b5e5-4d94-8fc7-8ba0013e5d76-kube-api-access-x7hhb\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.987655 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d259a06e-3949-41b6-a067-7c01441da4b1-tmp" (OuterVolumeSpecName: "tmp") pod "d259a06e-3949-41b6-a067-7c01441da4b1" (UID: "d259a06e-3949-41b6-a067-7c01441da4b1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.989493 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-config" (OuterVolumeSpecName: "config") pod "a78c6a97-054e-484e-aae2-a33bd3bb7b40" (UID: "a78c6a97-054e-484e-aae2-a33bd3bb7b40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.990075 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-client-ca" (OuterVolumeSpecName: "client-ca") pod "a78c6a97-054e-484e-aae2-a33bd3bb7b40" (UID: "a78c6a97-054e-484e-aae2-a33bd3bb7b40"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.991477 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "d259a06e-3949-41b6-a067-7c01441da4b1" (UID: "d259a06e-3949-41b6-a067-7c01441da4b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.994613 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d259a06e-3949-41b6-a067-7c01441da4b1" (UID: "d259a06e-3949-41b6-a067-7c01441da4b1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.995996 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-config" (OuterVolumeSpecName: "config") pod "d259a06e-3949-41b6-a067-7c01441da4b1" (UID: "d259a06e-3949-41b6-a067-7c01441da4b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:30 crc kubenswrapper[5130]: I1212 16:17:30.996520 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78c6a97-054e-484e-aae2-a33bd3bb7b40-tmp" (OuterVolumeSpecName: "tmp") pod "a78c6a97-054e-484e-aae2-a33bd3bb7b40" (UID: "a78c6a97-054e-484e-aae2-a33bd3bb7b40"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.002339 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d259a06e-3949-41b6-a067-7c01441da4b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d259a06e-3949-41b6-a067-7c01441da4b1" (UID: "d259a06e-3949-41b6-a067-7c01441da4b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.004807 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78c6a97-054e-484e-aae2-a33bd3bb7b40-kube-api-access-vfhxj" (OuterVolumeSpecName: "kube-api-access-vfhxj") pod "a78c6a97-054e-484e-aae2-a33bd3bb7b40" (UID: "a78c6a97-054e-484e-aae2-a33bd3bb7b40"). InnerVolumeSpecName "kube-api-access-vfhxj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.006063 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78c6a97-054e-484e-aae2-a33bd3bb7b40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a78c6a97-054e-484e-aae2-a33bd3bb7b40" (UID: "a78c6a97-054e-484e-aae2-a33bd3bb7b40"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.007464 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d259a06e-3949-41b6-a067-7c01441da4b1-kube-api-access-wv2mw" (OuterVolumeSpecName: "kube-api-access-wv2mw") pod "d259a06e-3949-41b6-a067-7c01441da4b1" (UID: "d259a06e-3949-41b6-a067-7c01441da4b1"). InnerVolumeSpecName "kube-api-access-wv2mw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.009195 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.021663 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.080747 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.087805 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2d5\" (UniqueName: \"kubernetes.io/projected/94e12db4-0aff-472b-9bb0-82451f7e2e17-kube-api-access-jw2d5\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.087872 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-client-ca\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" 
Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.087896 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94e12db4-0aff-472b-9bb0-82451f7e2e17-tmp\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088138 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e12db4-0aff-472b-9bb0-82451f7e2e17-serving-cert\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088308 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-config\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088333 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-proxy-ca-bundles\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088512 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wv2mw\" (UniqueName: \"kubernetes.io/projected/d259a06e-3949-41b6-a067-7c01441da4b1-kube-api-access-wv2mw\") on node \"crc\" DevicePath 
\"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088537 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d259a06e-3949-41b6-a067-7c01441da4b1-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088552 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a78c6a97-054e-484e-aae2-a33bd3bb7b40-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088566 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d259a06e-3949-41b6-a067-7c01441da4b1-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088577 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfhxj\" (UniqueName: \"kubernetes.io/projected/a78c6a97-054e-484e-aae2-a33bd3bb7b40-kube-api-access-vfhxj\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088588 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088600 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088611 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d259a06e-3949-41b6-a067-7c01441da4b1-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088621 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a78c6a97-054e-484e-aae2-a33bd3bb7b40-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088634 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.088645 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a78c6a97-054e-484e-aae2-a33bd3bb7b40-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:31 crc kubenswrapper[5130]: W1212 16:17:31.094425 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod24732491_f54a_410e_a29e_c8fb26fd9cde.slice/crio-225a20de8d07f41a40e55510e4ee7645069a3a5efa08475a72cc5ac4c3d86702 WatchSource:0}: Error finding container 225a20de8d07f41a40e55510e4ee7645069a3a5efa08475a72cc5ac4c3d86702: Status 404 returned error can't find the container with id 225a20de8d07f41a40e55510e4ee7645069a3a5efa08475a72cc5ac4c3d86702 Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.190267 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jw2d5\" (UniqueName: \"kubernetes.io/projected/94e12db4-0aff-472b-9bb0-82451f7e2e17-kube-api-access-jw2d5\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191000 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6c91f7f-5413-4050-bfac-93d5daa7e99f-tmp\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 
16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191034 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8759b\" (UniqueName: \"kubernetes.io/projected/e6c91f7f-5413-4050-bfac-93d5daa7e99f-kube-api-access-8759b\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191070 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-client-ca\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191204 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94e12db4-0aff-472b-9bb0-82451f7e2e17-tmp\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191407 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-config\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191542 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-client-ca\") pod 
\"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191639 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e12db4-0aff-472b-9bb0-82451f7e2e17-serving-cert\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191728 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6c91f7f-5413-4050-bfac-93d5daa7e99f-serving-cert\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191807 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-config\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.191851 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-proxy-ca-bundles\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.192212 5130 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-client-ca\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.192838 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94e12db4-0aff-472b-9bb0-82451f7e2e17-tmp\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.193790 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-proxy-ca-bundles\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.197964 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-config\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.205606 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e12db4-0aff-472b-9bb0-82451f7e2e17-serving-cert\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.210901 5130 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jw2d5\" (UniqueName: \"kubernetes.io/projected/94e12db4-0aff-472b-9bb0-82451f7e2e17-kube-api-access-jw2d5\") pod \"controller-manager-69f958c846-qd8rg\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") " pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.255052 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.293919 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6c91f7f-5413-4050-bfac-93d5daa7e99f-tmp\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.294619 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8759b\" (UniqueName: \"kubernetes.io/projected/e6c91f7f-5413-4050-bfac-93d5daa7e99f-kube-api-access-8759b\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.294714 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-config\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.294808 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-client-ca\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.294915 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6c91f7f-5413-4050-bfac-93d5daa7e99f-serving-cert\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.298295 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6c91f7f-5413-4050-bfac-93d5daa7e99f-tmp\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.300711 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6c91f7f-5413-4050-bfac-93d5daa7e99f-serving-cert\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.301642 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-client-ca\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 
16:17:31.301763 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-config\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.322616 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8759b\" (UniqueName: \"kubernetes.io/projected/e6c91f7f-5413-4050-bfac-93d5daa7e99f-kube-api-access-8759b\") pod \"route-controller-manager-f4599bd79-7rg9b\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.380169 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:31 crc kubenswrapper[5130]: E1212 16:17:31.390813 5130 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86909e43_e62d_4532_8232_aa3ca0de5d28.slice/crio-04ade99dbd97ff9459f2fe6675da507000404b4f17742875047d87c9473dd0ce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86909e43_e62d_4532_8232_aa3ca0de5d28.slice/crio-conmon-04ade99dbd97ff9459f2fe6675da507000404b4f17742875047d87c9473dd0ce.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.590568 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f958c846-qd8rg"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.640964 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9ndfc" event={"ID":"573d2658-6034-4715-a9ad-a7828b324fd5","Type":"ContainerStarted","Data":"401e173f0e693614a546c3cea9ff0cace58c184cd9cdd3104503b186b8193d00"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.645208 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7x92" event={"ID":"1aaf652b-1019-4193-839d-875d12cc1e27","Type":"ContainerDied","Data":"8b6e0f771c54e0f2031e922831f4e9a8890ad74e45ce729b1967a4918169b40b"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.646979 5130 generic.go:358] "Generic (PLEG): container finished" podID="1aaf652b-1019-4193-839d-875d12cc1e27" containerID="8b6e0f771c54e0f2031e922831f4e9a8890ad74e45ce729b1967a4918169b40b" exitCode=0 Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.656914 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" event={"ID":"94e12db4-0aff-472b-9bb0-82451f7e2e17","Type":"ContainerStarted","Data":"a91e75a5dac6930aac28aa81157a93d650d81215f1bbe01d548fac770f1d603f"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.659533 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.660110 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4" event={"ID":"a78c6a97-054e-484e-aae2-a33bd3bb7b40","Type":"ContainerDied","Data":"fe12aa686f8f130f2ed0db07a57b150e66a6ef1f7c1242cf968402245bac1b07"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.660224 5130 scope.go:117] "RemoveContainer" containerID="5afed13e7cab1d026459fabea793580ca962d81aa42c1db7a9cb82b49da4a6ad" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.675288 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.675289 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-flnsl" event={"ID":"d259a06e-3949-41b6-a067-7c01441da4b1","Type":"ContainerDied","Data":"2bf714089818fd6477a262dc7b43a76fa700b53d570bf643af2f365afa9909f2"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.681247 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-q8kdt_d943d968-b5e5-4d94-8fc7-8ba0013e5d76/kube-multus-additional-cni-plugins/0.log" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.681415 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" event={"ID":"d943d968-b5e5-4d94-8fc7-8ba0013e5d76","Type":"ContainerDied","Data":"f9dd92ceda1a3912c46704c32056af091c91e3402addb86408de2701845d893b"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.681519 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-q8kdt" Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.704000 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blsm" event={"ID":"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f","Type":"ContainerStarted","Data":"90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.707663 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"24732491-f54a-410e-a29e-c8fb26fd9cde","Type":"ContainerStarted","Data":"225a20de8d07f41a40e55510e4ee7645069a3a5efa08475a72cc5ac4c3d86702"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.721933 5130 generic.go:358] "Generic (PLEG): container finished" podID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerID="04ade99dbd97ff9459f2fe6675da507000404b4f17742875047d87c9473dd0ce" exitCode=0 Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.722056 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgp9n" event={"ID":"86909e43-e62d-4532-8232-aa3ca0de5d28","Type":"ContainerDied","Data":"04ade99dbd97ff9459f2fe6675da507000404b4f17742875047d87c9473dd0ce"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.726828 5130 generic.go:358] "Generic (PLEG): container finished" podID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerID="8de988694708cb26378759f8d7684338d87466d2b75cc374cef28f4917599fa1" exitCode=0 Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.726913 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7s65" event={"ID":"5957e518-15e6-4acf-9e45-4985b7713fc8","Type":"ContainerDied","Data":"8de988694708cb26378759f8d7684338d87466d2b75cc374cef28f4917599fa1"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.728128 5130 generic.go:358] "Generic (PLEG): container finished" 
podID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerID="c96b56d094ce6b0f2c68d90265e85262accf9682b73480f3085eb5ac9480fe0d" exitCode=0 Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.728218 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxjp8" event={"ID":"5319f16c-f39a-4bd6-836a-cb336099dbc2","Type":"ContainerDied","Data":"c96b56d094ce6b0f2c68d90265e85262accf9682b73480f3085eb5ac9480fe0d"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.736840 5130 generic.go:358] "Generic (PLEG): container finished" podID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerID="4510f8c6500cd79ead24de9fdb8d77ed1941057499119f5133a4d37c2a96bbc5" exitCode=0 Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.737056 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvzzz" event={"ID":"f1a12a40-8493-41e1-84b7-312fc948fca8","Type":"ContainerDied","Data":"4510f8c6500cd79ead24de9fdb8d77ed1941057499119f5133a4d37c2a96bbc5"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.757024 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gt6h" event={"ID":"3686d912-c8e4-413f-b036-f206a4e826a2","Type":"ContainerStarted","Data":"ae7e967711e223d099a40d4ed44911cbe8c26c71f4671c594e5898a37bde8057"} Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.821513 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b"] Dec 12 16:17:31 crc kubenswrapper[5130]: W1212 16:17:31.861937 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6c91f7f_5413_4050_bfac_93d5daa7e99f.slice/crio-8e06db9851a81391ddff393260eef28cf7e0fe05ed2c6b8c6e0a25403f2c97d7 WatchSource:0}: Error finding container 8e06db9851a81391ddff393260eef28cf7e0fe05ed2c6b8c6e0a25403f2c97d7: Status 404 returned error 
can't find the container with id 8e06db9851a81391ddff393260eef28cf7e0fe05ed2c6b8c6e0a25403f2c97d7 Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.910771 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-q8kdt"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.914926 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-q8kdt"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.935904 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-flnsl"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.936019 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-flnsl"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.950491 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.953160 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-zksq4"] Dec 12 16:17:31 crc kubenswrapper[5130]: I1212 16:17:31.955891 5130 scope.go:117] "RemoveContainer" containerID="6a61ed43182c84d5f5ba853b183f677998fddb6810cc65d32ca11633c12c5ced" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.061969 5130 scope.go:117] "RemoveContainer" containerID="a43d81fa9124491ab3f0c328136dc9f005a1eb4d472434916a6f523433e26c45" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.695423 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78c6a97-054e-484e-aae2-a33bd3bb7b40" path="/var/lib/kubelet/pods/a78c6a97-054e-484e-aae2-a33bd3bb7b40/volumes" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.698152 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d259a06e-3949-41b6-a067-7c01441da4b1" path="/var/lib/kubelet/pods/d259a06e-3949-41b6-a067-7c01441da4b1/volumes" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.699076 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d943d968-b5e5-4d94-8fc7-8ba0013e5d76" path="/var/lib/kubelet/pods/d943d968-b5e5-4d94-8fc7-8ba0013e5d76/volumes" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.807056 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgp9n" event={"ID":"86909e43-e62d-4532-8232-aa3ca0de5d28","Type":"ContainerStarted","Data":"d92aefce9af711ed35124010d6d51b4b332460e21787d7e2fc7107e34549e8e7"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.823089 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7s65" event={"ID":"5957e518-15e6-4acf-9e45-4985b7713fc8","Type":"ContainerStarted","Data":"25ff570b6f424117e42ff1743f3a91241b1aff3d4c69db869f3bafe8dccb5cf2"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.828456 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxjp8" event={"ID":"5319f16c-f39a-4bd6-836a-cb336099dbc2","Type":"ContainerStarted","Data":"53bab943b5773cdc5239723f9ca5da10a767ae7072c2d10430133a329b0826be"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.833001 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvzzz" event={"ID":"f1a12a40-8493-41e1-84b7-312fc948fca8","Type":"ContainerStarted","Data":"8a404432fdc03966e4b4413b026d4d5da46820bf9ded19a3ceb42d61ab1be328"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.837049 5130 generic.go:358] "Generic (PLEG): container finished" podID="3686d912-c8e4-413f-b036-f206a4e826a2" containerID="ae7e967711e223d099a40d4ed44911cbe8c26c71f4671c594e5898a37bde8057" exitCode=0 Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.838240 5130 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gt6h" event={"ID":"3686d912-c8e4-413f-b036-f206a4e826a2","Type":"ContainerDied","Data":"ae7e967711e223d099a40d4ed44911cbe8c26c71f4671c594e5898a37bde8057"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.838282 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gt6h" event={"ID":"3686d912-c8e4-413f-b036-f206a4e826a2","Type":"ContainerStarted","Data":"5fb7a27d9d232fecf29af0ea2cf521c7fcffd29cc516ee00c9b3fdc12860c3c9"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.848397 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7x92" event={"ID":"1aaf652b-1019-4193-839d-875d12cc1e27","Type":"ContainerStarted","Data":"56b6b5fa1fdb979a756c382f6c6262c415947ed7dac44278f932ddd7ef046da8"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.860836 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" event={"ID":"94e12db4-0aff-472b-9bb0-82451f7e2e17","Type":"ContainerStarted","Data":"09f6d61d6c86a25345a80608865a5ab3f3bc90d93937cb4854fd17383bcdf547"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.861853 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.862433 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pvzzz" podStartSLOduration=4.800518296 podStartE2EDuration="43.862397282s" podCreationTimestamp="2025-12-12 16:16:49 +0000 UTC" firstStartedPulling="2025-12-12 16:16:51.676908838 +0000 UTC m=+111.574583660" lastFinishedPulling="2025-12-12 16:17:30.738787814 +0000 UTC m=+150.636462646" observedRunningTime="2025-12-12 16:17:32.86128755 +0000 UTC m=+152.758962402" 
watchObservedRunningTime="2025-12-12 16:17:32.862397282 +0000 UTC m=+152.760072114" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.868899 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.876736 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" event={"ID":"e6c91f7f-5413-4050-bfac-93d5daa7e99f","Type":"ContainerStarted","Data":"5057f81cf498c9fac5a2c4b9da22d6d917e22145a2c597d5bd2c3692801c460c"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.876808 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" event={"ID":"e6c91f7f-5413-4050-bfac-93d5daa7e99f","Type":"ContainerStarted","Data":"8e06db9851a81391ddff393260eef28cf7e0fe05ed2c6b8c6e0a25403f2c97d7"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.877830 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.879154 5130 patch_prober.go:28] interesting pod/route-controller-manager-f4599bd79-7rg9b container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.879227 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" podUID="e6c91f7f-5413-4050-bfac-93d5daa7e99f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: 
connection refused" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.884399 5130 generic.go:358] "Generic (PLEG): container finished" podID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerID="90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673" exitCode=0 Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.884519 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blsm" event={"ID":"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f","Type":"ContainerDied","Data":"90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.889855 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"24732491-f54a-410e-a29e-c8fb26fd9cde","Type":"ContainerStarted","Data":"bc173b9a5509bc74f9c9051a18f70ee71b9aaa74fdcce66f385973a6a4b9f941"} Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.896816 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s7x92" podStartSLOduration=4.033937532 podStartE2EDuration="41.896789259s" podCreationTimestamp="2025-12-12 16:16:51 +0000 UTC" firstStartedPulling="2025-12-12 16:16:52.795249582 +0000 UTC m=+112.692924414" lastFinishedPulling="2025-12-12 16:17:30.658101299 +0000 UTC m=+150.555776141" observedRunningTime="2025-12-12 16:17:32.886772731 +0000 UTC m=+152.784447593" watchObservedRunningTime="2025-12-12 16:17:32.896789259 +0000 UTC m=+152.794464091" Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.925764 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" podStartSLOduration=6.925729699 podStartE2EDuration="6.925729699s" podCreationTimestamp="2025-12-12 16:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-12 16:17:32.923631049 +0000 UTC m=+152.821305891" watchObservedRunningTime="2025-12-12 16:17:32.925729699 +0000 UTC m=+152.823404531"
Dec 12 16:17:32 crc kubenswrapper[5130]: I1212 16:17:32.963823 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2gt6h" podStartSLOduration=4.945104303 podStartE2EDuration="43.963796471s" podCreationTimestamp="2025-12-12 16:16:49 +0000 UTC" firstStartedPulling="2025-12-12 16:16:51.705088676 +0000 UTC m=+111.602763498" lastFinishedPulling="2025-12-12 16:17:30.723780834 +0000 UTC m=+150.621455666" observedRunningTime="2025-12-12 16:17:32.951131898 +0000 UTC m=+152.848806750" watchObservedRunningTime="2025-12-12 16:17:32.963796471 +0000 UTC m=+152.861471303"
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.013165 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" podStartSLOduration=7.006921339 podStartE2EDuration="7.006921339s" podCreationTimestamp="2025-12-12 16:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:32.981266113 +0000 UTC m=+152.878940965" watchObservedRunningTime="2025-12-12 16:17:33.006921339 +0000 UTC m=+152.904596171"
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.036241 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=7.036216639 podStartE2EDuration="7.036216639s" podCreationTimestamp="2025-12-12 16:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:33.03067617 +0000 UTC m=+152.928351002" watchObservedRunningTime="2025-12-12 16:17:33.036216639 +0000 UTC m=+152.933891471"
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.895639 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blsm" event={"ID":"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f","Type":"ContainerStarted","Data":"f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953"}
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.898614 5130 generic.go:358] "Generic (PLEG): container finished" podID="24732491-f54a-410e-a29e-c8fb26fd9cde" containerID="bc173b9a5509bc74f9c9051a18f70ee71b9aaa74fdcce66f385973a6a4b9f941" exitCode=0
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.899068 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"24732491-f54a-410e-a29e-c8fb26fd9cde","Type":"ContainerDied","Data":"bc173b9a5509bc74f9c9051a18f70ee71b9aaa74fdcce66f385973a6a4b9f941"}
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.901891 5130 generic.go:358] "Generic (PLEG): container finished" podID="573d2658-6034-4715-a9ad-a7828b324fd5" containerID="401e173f0e693614a546c3cea9ff0cace58c184cd9cdd3104503b186b8193d00" exitCode=0
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.901942 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ndfc" event={"ID":"573d2658-6034-4715-a9ad-a7828b324fd5","Type":"ContainerDied","Data":"401e173f0e693614a546c3cea9ff0cace58c184cd9cdd3104503b186b8193d00"}
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.911035 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b"
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.932467 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2blsm" podStartSLOduration=6.344135354 podStartE2EDuration="41.932449263s" podCreationTimestamp="2025-12-12 16:16:52 +0000 UTC" firstStartedPulling="2025-12-12 16:16:55.126459466 +0000 UTC m=+115.024134298" lastFinishedPulling="2025-12-12 16:17:30.714773375 +0000 UTC m=+150.612448207" observedRunningTime="2025-12-12 16:17:33.931852196 +0000 UTC m=+153.829527048" watchObservedRunningTime="2025-12-12 16:17:33.932449263 +0000 UTC m=+153.830124095"
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.961718 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7s65" podStartSLOduration=5.878733662 podStartE2EDuration="44.961694202s" podCreationTimestamp="2025-12-12 16:16:49 +0000 UTC" firstStartedPulling="2025-12-12 16:16:51.625204666 +0000 UTC m=+111.522879498" lastFinishedPulling="2025-12-12 16:17:30.708165206 +0000 UTC m=+150.605840038" observedRunningTime="2025-12-12 16:17:33.958552022 +0000 UTC m=+153.856226874" watchObservedRunningTime="2025-12-12 16:17:33.961694202 +0000 UTC m=+153.859369034"
Dec 12 16:17:33 crc kubenswrapper[5130]: I1212 16:17:33.986158 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kxjp8" podStartSLOduration=7.109994584 podStartE2EDuration="44.986139193s" podCreationTimestamp="2025-12-12 16:16:49 +0000 UTC" firstStartedPulling="2025-12-12 16:16:52.830444921 +0000 UTC m=+112.728119753" lastFinishedPulling="2025-12-12 16:17:30.70658953 +0000 UTC m=+150.604264362" observedRunningTime="2025-12-12 16:17:33.980749879 +0000 UTC m=+153.878424721" watchObservedRunningTime="2025-12-12 16:17:33.986139193 +0000 UTC m=+153.883814025"
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.004531 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mgp9n" podStartSLOduration=6.147026083 podStartE2EDuration="43.004508401s" podCreationTimestamp="2025-12-12 16:16:51 +0000 UTC" firstStartedPulling="2025-12-12 16:16:53.857791342 +0000 UTC m=+113.755466174" lastFinishedPulling="2025-12-12 16:17:30.71527366 +0000 UTC m=+150.612948492" observedRunningTime="2025-12-12 16:17:34.000479585 +0000 UTC m=+153.898154437" watchObservedRunningTime="2025-12-12 16:17:34.004508401 +0000 UTC m=+153.902183233"
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.854723 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.876228 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.876498 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.908607 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-var-lock\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.908701 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kube-api-access\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.908916 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kubelet-dir\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.922501 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ndfc" event={"ID":"573d2658-6034-4715-a9ad-a7828b324fd5","Type":"ContainerStarted","Data":"e5ac6bb6b6a834b1d5556d9b1331cd2084885f081082cc31d77c1b8643f8d55b"}
Dec 12 16:17:34 crc kubenswrapper[5130]: I1212 16:17:34.995375 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ndfc" podStartSLOduration=7.11527278 podStartE2EDuration="42.995344449s" podCreationTimestamp="2025-12-12 16:16:52 +0000 UTC" firstStartedPulling="2025-12-12 16:16:54.934268953 +0000 UTC m=+114.831943785" lastFinishedPulling="2025-12-12 16:17:30.814340622 +0000 UTC m=+150.712015454" observedRunningTime="2025-12-12 16:17:34.990922292 +0000 UTC m=+154.888597114" watchObservedRunningTime="2025-12-12 16:17:34.995344449 +0000 UTC m=+154.893019281"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.010273 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-var-lock\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.010458 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kube-api-access\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.010468 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-var-lock\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.010620 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kubelet-dir\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.011037 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kubelet-dir\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.046634 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kube-api-access\") pod \"installer-12-crc\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.200571 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.274541 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.315264 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24732491-f54a-410e-a29e-c8fb26fd9cde-kubelet-dir\") pod \"24732491-f54a-410e-a29e-c8fb26fd9cde\" (UID: \"24732491-f54a-410e-a29e-c8fb26fd9cde\") "
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.315378 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24732491-f54a-410e-a29e-c8fb26fd9cde-kube-api-access\") pod \"24732491-f54a-410e-a29e-c8fb26fd9cde\" (UID: \"24732491-f54a-410e-a29e-c8fb26fd9cde\") "
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.315611 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24732491-f54a-410e-a29e-c8fb26fd9cde-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "24732491-f54a-410e-a29e-c8fb26fd9cde" (UID: "24732491-f54a-410e-a29e-c8fb26fd9cde"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.343374 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24732491-f54a-410e-a29e-c8fb26fd9cde-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "24732491-f54a-410e-a29e-c8fb26fd9cde" (UID: "24732491-f54a-410e-a29e-c8fb26fd9cde"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.416640 5130 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24732491-f54a-410e-a29e-c8fb26fd9cde-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.416677 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24732491-f54a-410e-a29e-c8fb26fd9cde-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.511488 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 12 16:17:35 crc kubenswrapper[5130]: W1212 16:17:35.519371 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod214aeed8_f6a2_4251_b4d0_c81fd217c7c2.slice/crio-e9a0bf2b155dc14ff07a59baf202683f9cd8e1f0c8d1a97324c66ce16b92ed3d WatchSource:0}: Error finding container e9a0bf2b155dc14ff07a59baf202683f9cd8e1f0c8d1a97324c66ce16b92ed3d: Status 404 returned error can't find the container with id e9a0bf2b155dc14ff07a59baf202683f9cd8e1f0c8d1a97324c66ce16b92ed3d
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.934588 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"214aeed8-f6a2-4251-b4d0-c81fd217c7c2","Type":"ContainerStarted","Data":"e9a0bf2b155dc14ff07a59baf202683f9cd8e1f0c8d1a97324c66ce16b92ed3d"}
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.937295 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"24732491-f54a-410e-a29e-c8fb26fd9cde","Type":"ContainerDied","Data":"225a20de8d07f41a40e55510e4ee7645069a3a5efa08475a72cc5ac4c3d86702"}
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.937366 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225a20de8d07f41a40e55510e4ee7645069a3a5efa08475a72cc5ac4c3d86702"
Dec 12 16:17:35 crc kubenswrapper[5130]: I1212 16:17:35.937548 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 12 16:17:36 crc kubenswrapper[5130]: I1212 16:17:36.945888 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"214aeed8-f6a2-4251-b4d0-c81fd217c7c2","Type":"ContainerStarted","Data":"7eea8ddfdf2799e96a4d403b19f067a1e7d06758be2fb080a0c405d345d4b8b4"}
Dec 12 16:17:36 crc kubenswrapper[5130]: I1212 16:17:36.978314 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.978288291 podStartE2EDuration="2.978288291s" podCreationTimestamp="2025-12-12 16:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:36.973948556 +0000 UTC m=+156.871623388" watchObservedRunningTime="2025-12-12 16:17:36.978288291 +0000 UTC m=+156.875963123"
Dec 12 16:17:39 crc kubenswrapper[5130]: I1212 16:17:39.644334 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pvzzz"
Dec 12 16:17:39 crc kubenswrapper[5130]: I1212 16:17:39.644384 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-pvzzz"
Dec 12 16:17:39 crc kubenswrapper[5130]: I1212 16:17:39.841743 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2gt6h"
Dec 12 16:17:39 crc kubenswrapper[5130]: I1212 16:17:39.842423 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2gt6h"
Dec 12 16:17:40 crc kubenswrapper[5130]: I1212 16:17:40.099305 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-kxjp8"
Dec 12 16:17:40 crc kubenswrapper[5130]: I1212 16:17:40.099364 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kxjp8"
Dec 12 16:17:40 crc kubenswrapper[5130]: I1212 16:17:40.250322 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-p7s65"
Dec 12 16:17:40 crc kubenswrapper[5130]: I1212 16:17:40.250391 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7s65"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.066314 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7s65"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.068005 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kxjp8"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.068780 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2gt6h"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.069404 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pvzzz"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.115742 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pvzzz"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.116010 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7s65"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.120283 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2gt6h"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.128958 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kxjp8"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.835838 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s7x92"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.835929 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-s7x92"
Dec 12 16:17:41 crc kubenswrapper[5130]: I1212 16:17:41.886814 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s7x92"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.027304 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s7x92"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.232730 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mgp9n"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.232778 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-mgp9n"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.308699 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mgp9n"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.477630 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxjp8"]
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.891874 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.892253 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.957783 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:17:42 crc kubenswrapper[5130]: I1212 16:17:42.985348 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kxjp8" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="registry-server" containerID="cri-o://53bab943b5773cdc5239723f9ca5da10a767ae7072c2d10430133a329b0826be" gracePeriod=2
Dec 12 16:17:43 crc kubenswrapper[5130]: I1212 16:17:43.027222 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mgp9n"
Dec 12 16:17:43 crc kubenswrapper[5130]: I1212 16:17:43.030844 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:17:43 crc kubenswrapper[5130]: I1212 16:17:43.264586 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:17:43 crc kubenswrapper[5130]: I1212 16:17:43.264624 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:17:43 crc kubenswrapper[5130]: I1212 16:17:43.309945 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:17:43 crc kubenswrapper[5130]: I1212 16:17:43.478070 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7s65"]
Dec 12 16:17:43 crc kubenswrapper[5130]: I1212 16:17:43.478546 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7s65" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="registry-server" containerID="cri-o://25ff570b6f424117e42ff1743f3a91241b1aff3d4c69db869f3bafe8dccb5cf2" gracePeriod=2
Dec 12 16:17:44 crc kubenswrapper[5130]: I1212 16:17:44.043986 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2blsm"
Dec 12 16:17:44 crc kubenswrapper[5130]: I1212 16:17:44.883804 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgp9n"]
Dec 12 16:17:45 crc kubenswrapper[5130]: I1212 16:17:45.007488 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mgp9n" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="registry-server" containerID="cri-o://d92aefce9af711ed35124010d6d51b4b332460e21787d7e2fc7107e34549e8e7" gracePeriod=2
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.021016 5130 generic.go:358] "Generic (PLEG): container finished" podID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerID="d92aefce9af711ed35124010d6d51b4b332460e21787d7e2fc7107e34549e8e7" exitCode=0
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.021693 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgp9n" event={"ID":"86909e43-e62d-4532-8232-aa3ca0de5d28","Type":"ContainerDied","Data":"d92aefce9af711ed35124010d6d51b4b332460e21787d7e2fc7107e34549e8e7"}
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.031314 5130 generic.go:358] "Generic (PLEG): container finished" podID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerID="25ff570b6f424117e42ff1743f3a91241b1aff3d4c69db869f3bafe8dccb5cf2" exitCode=0
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.031391 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7s65" event={"ID":"5957e518-15e6-4acf-9e45-4985b7713fc8","Type":"ContainerDied","Data":"25ff570b6f424117e42ff1743f3a91241b1aff3d4c69db869f3bafe8dccb5cf2"}
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.035491 5130 generic.go:358] "Generic (PLEG): container finished" podID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerID="53bab943b5773cdc5239723f9ca5da10a767ae7072c2d10430133a329b0826be" exitCode=0
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.035616 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxjp8" event={"ID":"5319f16c-f39a-4bd6-836a-cb336099dbc2","Type":"ContainerDied","Data":"53bab943b5773cdc5239723f9ca5da10a767ae7072c2d10430133a329b0826be"}
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.249100 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxjp8"
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.397580 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj8qq\" (UniqueName: \"kubernetes.io/projected/5319f16c-f39a-4bd6-836a-cb336099dbc2-kube-api-access-gj8qq\") pod \"5319f16c-f39a-4bd6-836a-cb336099dbc2\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") "
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.397709 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-utilities\") pod \"5319f16c-f39a-4bd6-836a-cb336099dbc2\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") "
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.397751 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-catalog-content\") pod \"5319f16c-f39a-4bd6-836a-cb336099dbc2\" (UID: \"5319f16c-f39a-4bd6-836a-cb336099dbc2\") "
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.400404 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-utilities" (OuterVolumeSpecName: "utilities") pod "5319f16c-f39a-4bd6-836a-cb336099dbc2" (UID: "5319f16c-f39a-4bd6-836a-cb336099dbc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.404556 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5319f16c-f39a-4bd6-836a-cb336099dbc2-kube-api-access-gj8qq" (OuterVolumeSpecName: "kube-api-access-gj8qq") pod "5319f16c-f39a-4bd6-836a-cb336099dbc2" (UID: "5319f16c-f39a-4bd6-836a-cb336099dbc2"). InnerVolumeSpecName "kube-api-access-gj8qq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.443617 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5319f16c-f39a-4bd6-836a-cb336099dbc2" (UID: "5319f16c-f39a-4bd6-836a-cb336099dbc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.475397 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f958c846-qd8rg"]
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.476129 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" podUID="94e12db4-0aff-472b-9bb0-82451f7e2e17" containerName="controller-manager" containerID="cri-o://09f6d61d6c86a25345a80608865a5ab3f3bc90d93937cb4854fd17383bcdf547" gracePeriod=30
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.504005 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gj8qq\" (UniqueName: \"kubernetes.io/projected/5319f16c-f39a-4bd6-836a-cb336099dbc2-kube-api-access-gj8qq\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.504066 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.504091 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5319f16c-f39a-4bd6-836a-cb336099dbc2-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.522421 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b"]
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.522836 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" podUID="e6c91f7f-5413-4050-bfac-93d5daa7e99f" containerName="route-controller-manager" containerID="cri-o://5057f81cf498c9fac5a2c4b9da22d6d917e22145a2c597d5bd2c3692801c460c" gracePeriod=30
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.934586 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7s65"
Dec 12 16:17:46 crc kubenswrapper[5130]: I1212 16:17:46.939591 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgp9n"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.012237 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-catalog-content\") pod \"5957e518-15e6-4acf-9e45-4985b7713fc8\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") "
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.012330 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-utilities\") pod \"5957e518-15e6-4acf-9e45-4985b7713fc8\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") "
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.012387 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbxpj\" (UniqueName: \"kubernetes.io/projected/5957e518-15e6-4acf-9e45-4985b7713fc8-kube-api-access-kbxpj\") pod \"5957e518-15e6-4acf-9e45-4985b7713fc8\" (UID: \"5957e518-15e6-4acf-9e45-4985b7713fc8\") "
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.013206 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-utilities" (OuterVolumeSpecName: "utilities") pod "5957e518-15e6-4acf-9e45-4985b7713fc8" (UID: "5957e518-15e6-4acf-9e45-4985b7713fc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.020285 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5957e518-15e6-4acf-9e45-4985b7713fc8-kube-api-access-kbxpj" (OuterVolumeSpecName: "kube-api-access-kbxpj") pod "5957e518-15e6-4acf-9e45-4985b7713fc8" (UID: "5957e518-15e6-4acf-9e45-4985b7713fc8"). InnerVolumeSpecName "kube-api-access-kbxpj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.046059 5130 generic.go:358] "Generic (PLEG): container finished" podID="94e12db4-0aff-472b-9bb0-82451f7e2e17" containerID="09f6d61d6c86a25345a80608865a5ab3f3bc90d93937cb4854fd17383bcdf547" exitCode=0
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.046207 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" event={"ID":"94e12db4-0aff-472b-9bb0-82451f7e2e17","Type":"ContainerDied","Data":"09f6d61d6c86a25345a80608865a5ab3f3bc90d93937cb4854fd17383bcdf547"}
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.047744 5130 generic.go:358] "Generic (PLEG): container finished" podID="e6c91f7f-5413-4050-bfac-93d5daa7e99f" containerID="5057f81cf498c9fac5a2c4b9da22d6d917e22145a2c597d5bd2c3692801c460c" exitCode=0
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.047862 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" event={"ID":"e6c91f7f-5413-4050-bfac-93d5daa7e99f","Type":"ContainerDied","Data":"5057f81cf498c9fac5a2c4b9da22d6d917e22145a2c597d5bd2c3692801c460c"}
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.050716 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgp9n" event={"ID":"86909e43-e62d-4532-8232-aa3ca0de5d28","Type":"ContainerDied","Data":"6f2c7e4ee8005058653be608254682e6f8ccf99963c0cc49075bb88e3c4fee94"}
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.050793 5130 scope.go:117] "RemoveContainer" containerID="d92aefce9af711ed35124010d6d51b4b332460e21787d7e2fc7107e34549e8e7"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.051113 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgp9n"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.056418 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7s65" event={"ID":"5957e518-15e6-4acf-9e45-4985b7713fc8","Type":"ContainerDied","Data":"36bd50d659f1abd49597b7cae2eaed8aebe612ec36c3f9fbc5758f96ffbde8ed"}
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.056624 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7s65"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.061971 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxjp8" event={"ID":"5319f16c-f39a-4bd6-836a-cb336099dbc2","Type":"ContainerDied","Data":"ff8c45863778a48a425a28a9a87918b0efc06a9a71abddaf0a58cf0518f7b451"}
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.062142 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxjp8"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.081559 5130 scope.go:117] "RemoveContainer" containerID="04ade99dbd97ff9459f2fe6675da507000404b4f17742875047d87c9473dd0ce"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.114401 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-utilities\") pod \"86909e43-e62d-4532-8232-aa3ca0de5d28\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") "
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.114519 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-catalog-content\") pod \"86909e43-e62d-4532-8232-aa3ca0de5d28\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") "
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.114612 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9q2v\" (UniqueName: \"kubernetes.io/projected/86909e43-e62d-4532-8232-aa3ca0de5d28-kube-api-access-r9q2v\") pod \"86909e43-e62d-4532-8232-aa3ca0de5d28\" (UID: \"86909e43-e62d-4532-8232-aa3ca0de5d28\") "
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.115002 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.115023 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kbxpj\" (UniqueName: \"kubernetes.io/projected/5957e518-15e6-4acf-9e45-4985b7713fc8-kube-api-access-kbxpj\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.117662 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-utilities" (OuterVolumeSpecName: "utilities") pod "86909e43-e62d-4532-8232-aa3ca0de5d28" (UID: "86909e43-e62d-4532-8232-aa3ca0de5d28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.119074 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86909e43-e62d-4532-8232-aa3ca0de5d28-kube-api-access-r9q2v" (OuterVolumeSpecName: "kube-api-access-r9q2v") pod "86909e43-e62d-4532-8232-aa3ca0de5d28" (UID: "86909e43-e62d-4532-8232-aa3ca0de5d28"). InnerVolumeSpecName "kube-api-access-r9q2v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.121258 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxjp8"]
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.134267 5130 scope.go:117] "RemoveContainer" containerID="bbe21f163134a76fb060a74769ac915b36f75e2f19c37cef8c4ecf4493e03ed2"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.131170 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kxjp8"]
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.155023 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86909e43-e62d-4532-8232-aa3ca0de5d28" (UID: "86909e43-e62d-4532-8232-aa3ca0de5d28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.170832 5130 scope.go:117] "RemoveContainer" containerID="25ff570b6f424117e42ff1743f3a91241b1aff3d4c69db869f3bafe8dccb5cf2"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.190324 5130 scope.go:117] "RemoveContainer" containerID="8de988694708cb26378759f8d7684338d87466d2b75cc374cef28f4917599fa1"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.214968 5130 scope.go:117] "RemoveContainer" containerID="4f5f7fa1a8db052822e01db0820c2072f4c3ff8177b85e7fd8eb4cac99d50eb3"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.216356 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9q2v\" (UniqueName: \"kubernetes.io/projected/86909e43-e62d-4532-8232-aa3ca0de5d28-kube-api-access-r9q2v\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.216401 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.216418 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86909e43-e62d-4532-8232-aa3ca0de5d28-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.248421 5130 scope.go:117] "RemoveContainer" containerID="53bab943b5773cdc5239723f9ca5da10a767ae7072c2d10430133a329b0826be"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.269837 5130 scope.go:117] "RemoveContainer" containerID="c96b56d094ce6b0f2c68d90265e85262accf9682b73480f3085eb5ac9480fe0d"
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.277934 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2blsm"]
Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.278450 5130
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2blsm" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="registry-server" containerID="cri-o://f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953" gracePeriod=2 Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.298670 5130 scope.go:117] "RemoveContainer" containerID="065f8ce6c69b7680313e715beb5f43833d2c2ad2a400593e9de9f40d21f7bf39" Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.384738 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgp9n"] Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.387587 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgp9n"] Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.485972 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5957e518-15e6-4acf-9e45-4985b7713fc8" (UID: "5957e518-15e6-4acf-9e45-4985b7713fc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.520354 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5957e518-15e6-4acf-9e45-4985b7713fc8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.694372 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7s65"] Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.697577 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7s65"] Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.989160 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:47 crc kubenswrapper[5130]: I1212 16:17:47.996778 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2blsm" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.022205 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"] Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023131 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023156 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023204 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e6c91f7f-5413-4050-bfac-93d5daa7e99f" containerName="route-controller-manager" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023212 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c91f7f-5413-4050-bfac-93d5daa7e99f" containerName="route-controller-manager" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023222 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023228 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023239 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="extract-utilities" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023247 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="extract-utilities" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 
16:17:48.023257 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023263 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023273 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023280 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023289 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24732491-f54a-410e-a29e-c8fb26fd9cde" containerName="pruner" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023295 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="24732491-f54a-410e-a29e-c8fb26fd9cde" containerName="pruner" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023303 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023310 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023319 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="extract-utilities" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023327 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="extract-utilities" Dec 
12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023334 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023341 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023351 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023357 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="extract-content" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023366 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="extract-utilities" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023372 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="extract-utilities" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023380 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="extract-utilities" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023386 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="extract-utilities" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023395 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023456 5130 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023561 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023572 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023585 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="e6c91f7f-5413-4050-bfac-93d5daa7e99f" containerName="route-controller-manager" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023592 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="24732491-f54a-410e-a29e-c8fb26fd9cde" containerName="pruner" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023600 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.023608 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerName="registry-server" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.074109 5130 generic.go:358] "Generic (PLEG): container finished" podID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" containerID="f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953" exitCode=0 Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130169 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gj7n\" (UniqueName: \"kubernetes.io/projected/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-kube-api-access-8gj7n\") pod \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130263 5130 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-client-ca\") pod \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130316 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-config\") pod \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130335 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6c91f7f-5413-4050-bfac-93d5daa7e99f-serving-cert\") pod \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130355 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-utilities\") pod \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130392 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-catalog-content\") pod \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\" (UID: \"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f\") " Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130417 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6c91f7f-5413-4050-bfac-93d5daa7e99f-tmp\") pod \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " Dec 12 
16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.130449 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8759b\" (UniqueName: \"kubernetes.io/projected/e6c91f7f-5413-4050-bfac-93d5daa7e99f-kube-api-access-8759b\") pod \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\" (UID: \"e6c91f7f-5413-4050-bfac-93d5daa7e99f\") " Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.131417 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c91f7f-5413-4050-bfac-93d5daa7e99f-tmp" (OuterVolumeSpecName: "tmp") pod "e6c91f7f-5413-4050-bfac-93d5daa7e99f" (UID: "e6c91f7f-5413-4050-bfac-93d5daa7e99f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.131493 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-utilities" (OuterVolumeSpecName: "utilities") pod "fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" (UID: "fb3b2430-d128-4d2d-9518-6be0ca0ddc6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.132020 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-config" (OuterVolumeSpecName: "config") pod "e6c91f7f-5413-4050-bfac-93d5daa7e99f" (UID: "e6c91f7f-5413-4050-bfac-93d5daa7e99f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.136597 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6c91f7f-5413-4050-bfac-93d5daa7e99f" (UID: "e6c91f7f-5413-4050-bfac-93d5daa7e99f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.138082 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c91f7f-5413-4050-bfac-93d5daa7e99f-kube-api-access-8759b" (OuterVolumeSpecName: "kube-api-access-8759b") pod "e6c91f7f-5413-4050-bfac-93d5daa7e99f" (UID: "e6c91f7f-5413-4050-bfac-93d5daa7e99f"). InnerVolumeSpecName "kube-api-access-8759b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.138227 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c91f7f-5413-4050-bfac-93d5daa7e99f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6c91f7f-5413-4050-bfac-93d5daa7e99f" (UID: "e6c91f7f-5413-4050-bfac-93d5daa7e99f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.138390 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-kube-api-access-8gj7n" (OuterVolumeSpecName: "kube-api-access-8gj7n") pod "fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" (UID: "fb3b2430-d128-4d2d-9518-6be0ca0ddc6f"). InnerVolumeSpecName "kube-api-access-8gj7n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.208990 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"] Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.209058 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" event={"ID":"e6c91f7f-5413-4050-bfac-93d5daa7e99f","Type":"ContainerDied","Data":"8e06db9851a81391ddff393260eef28cf7e0fe05ed2c6b8c6e0a25403f2c97d7"} Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.209106 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blsm" event={"ID":"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f","Type":"ContainerDied","Data":"f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953"} Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.209114 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.209139 5130 scope.go:117] "RemoveContainer" containerID="5057f81cf498c9fac5a2c4b9da22d6d917e22145a2c597d5bd2c3692801c460c" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.209122 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blsm" event={"ID":"fb3b2430-d128-4d2d-9518-6be0ca0ddc6f","Type":"ContainerDied","Data":"4e9f04b1e852fa9141933d1eca7d926563f8ad649e9315eff76350ec836adf3d"} Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.209959 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.212895 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2blsm" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.216809 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.216918 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.217111 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.217246 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.217279 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.217528 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.225382 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" (UID: "fb3b2430-d128-4d2d-9518-6be0ca0ddc6f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231835 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gj7n\" (UniqueName: \"kubernetes.io/projected/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-kube-api-access-8gj7n\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231873 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231887 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c91f7f-5413-4050-bfac-93d5daa7e99f-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231897 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6c91f7f-5413-4050-bfac-93d5daa7e99f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231909 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231921 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231932 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e6c91f7f-5413-4050-bfac-93d5daa7e99f-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.231945 5130 reconciler_common.go:299] "Volume detached for 
volume \"kube-api-access-8759b\" (UniqueName: \"kubernetes.io/projected/e6c91f7f-5413-4050-bfac-93d5daa7e99f-kube-api-access-8759b\") on node \"crc\" DevicePath \"\"" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.251036 5130 scope.go:117] "RemoveContainer" containerID="f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.265156 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b"] Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.269629 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4599bd79-7rg9b"] Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.273877 5130 scope.go:117] "RemoveContainer" containerID="90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.277112 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.299646 5130 scope.go:117] "RemoveContainer" containerID="3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.315538 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"] Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.316088 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94e12db4-0aff-472b-9bb0-82451f7e2e17" containerName="controller-manager" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.316273 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e12db4-0aff-472b-9bb0-82451f7e2e17" containerName="controller-manager" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.316385 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="94e12db4-0aff-472b-9bb0-82451f7e2e17" containerName="controller-manager" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.332684 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-config\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.332781 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-client-ca\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" Dec 12 16:17:48 crc 
kubenswrapper[5130]: I1212 16:17:48.332806 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rlv4\" (UniqueName: \"kubernetes.io/projected/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-kube-api-access-5rlv4\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.332852 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-tmp\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.332874 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-serving-cert\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.346988 5130 scope.go:117] "RemoveContainer" containerID="f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953" Dec 12 16:17:48 crc kubenswrapper[5130]: E1212 16:17:48.347611 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953\": container with ID starting with f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953 not found: ID does not exist" containerID="f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953" Dec 12 
16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.347661 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953"} err="failed to get container status \"f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953\": rpc error: code = NotFound desc = could not find container \"f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953\": container with ID starting with f36c8627266367ea2e2222af786eb317dffcb5d7f0cf3db9ff6eddd2263fa953 not found: ID does not exist"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.347709 5130 scope.go:117] "RemoveContainer" containerID="90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673"
Dec 12 16:17:48 crc kubenswrapper[5130]: E1212 16:17:48.347979 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673\": container with ID starting with 90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673 not found: ID does not exist" containerID="90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.348007 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673"} err="failed to get container status \"90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673\": rpc error: code = NotFound desc = could not find container \"90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673\": container with ID starting with 90b962f54889ccc4438518c72174f19009f85965e3d1732e9ebb6a3b2ebe8673 not found: ID does not exist"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.348024 5130 scope.go:117] "RemoveContainer" containerID="3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147"
Dec 12 16:17:48 crc kubenswrapper[5130]: E1212 16:17:48.348868 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147\": container with ID starting with 3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147 not found: ID does not exist" containerID="3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.348932 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147"} err="failed to get container status \"3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147\": rpc error: code = NotFound desc = could not find container \"3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147\": container with ID starting with 3ee00a3473441a6ac4512641dd065383db13c63e8d72ccab3b77b3a4ab459147 not found: ID does not exist"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.433681 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94e12db4-0aff-472b-9bb0-82451f7e2e17-tmp\") pod \"94e12db4-0aff-472b-9bb0-82451f7e2e17\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") "
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.433758 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e12db4-0aff-472b-9bb0-82451f7e2e17-serving-cert\") pod \"94e12db4-0aff-472b-9bb0-82451f7e2e17\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") "
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.433803 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-client-ca\") pod \"94e12db4-0aff-472b-9bb0-82451f7e2e17\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") "
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.433901 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw2d5\" (UniqueName: \"kubernetes.io/projected/94e12db4-0aff-472b-9bb0-82451f7e2e17-kube-api-access-jw2d5\") pod \"94e12db4-0aff-472b-9bb0-82451f7e2e17\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") "
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.433988 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-proxy-ca-bundles\") pod \"94e12db4-0aff-472b-9bb0-82451f7e2e17\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") "
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434075 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-config\") pod \"94e12db4-0aff-472b-9bb0-82451f7e2e17\" (UID: \"94e12db4-0aff-472b-9bb0-82451f7e2e17\") "
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434260 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-client-ca\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434300 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rlv4\" (UniqueName: \"kubernetes.io/projected/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-kube-api-access-5rlv4\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434370 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94e12db4-0aff-472b-9bb0-82451f7e2e17-tmp" (OuterVolumeSpecName: "tmp") pod "94e12db4-0aff-472b-9bb0-82451f7e2e17" (UID: "94e12db4-0aff-472b-9bb0-82451f7e2e17"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434559 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-tmp\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434638 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-serving-cert\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434864 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-config\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434871 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "94e12db4-0aff-472b-9bb0-82451f7e2e17" (UID: "94e12db4-0aff-472b-9bb0-82451f7e2e17"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.434942 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-config" (OuterVolumeSpecName: "config") pod "94e12db4-0aff-472b-9bb0-82451f7e2e17" (UID: "94e12db4-0aff-472b-9bb0-82451f7e2e17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.435095 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-tmp\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.435238 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.435252 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.435263 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94e12db4-0aff-472b-9bb0-82451f7e2e17-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.435396 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-client-ca" (OuterVolumeSpecName: "client-ca") pod "94e12db4-0aff-472b-9bb0-82451f7e2e17" (UID: "94e12db4-0aff-472b-9bb0-82451f7e2e17"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.435468 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-client-ca\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.436026 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-config\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.442086 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e12db4-0aff-472b-9bb0-82451f7e2e17-kube-api-access-jw2d5" (OuterVolumeSpecName: "kube-api-access-jw2d5") pod "94e12db4-0aff-472b-9bb0-82451f7e2e17" (UID: "94e12db4-0aff-472b-9bb0-82451f7e2e17"). InnerVolumeSpecName "kube-api-access-jw2d5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.442336 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e12db4-0aff-472b-9bb0-82451f7e2e17-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "94e12db4-0aff-472b-9bb0-82451f7e2e17" (UID: "94e12db4-0aff-472b-9bb0-82451f7e2e17"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.454764 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-serving-cert\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.456877 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rlv4\" (UniqueName: \"kubernetes.io/projected/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-kube-api-access-5rlv4\") pod \"route-controller-manager-6b47f77689-5r77s\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") " pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.540701 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jw2d5\" (UniqueName: \"kubernetes.io/projected/94e12db4-0aff-472b-9bb0-82451f7e2e17-kube-api-access-jw2d5\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.540747 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e12db4-0aff-472b-9bb0-82451f7e2e17-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.540757 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94e12db4-0aff-472b-9bb0-82451f7e2e17-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.565001 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:48 crc kubenswrapper[5130]: W1212 16:17:48.790168 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f01c145_aa38_45ce_bd88_2ec20e5b6b01.slice/crio-609e288f6150383689e6e4701e91eb57b7f4ba8565dc180e325d258aabd97881 WatchSource:0}: Error finding container 609e288f6150383689e6e4701e91eb57b7f4ba8565dc180e325d258aabd97881: Status 404 returned error can't find the container with id 609e288f6150383689e6e4701e91eb57b7f4ba8565dc180e325d258aabd97881
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.931872 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"]
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.931941 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2blsm"]
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.931967 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2blsm"]
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.932018 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.978662 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5319f16c-f39a-4bd6-836a-cb336099dbc2" path="/var/lib/kubelet/pods/5319f16c-f39a-4bd6-836a-cb336099dbc2/volumes"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.979755 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5957e518-15e6-4acf-9e45-4985b7713fc8" path="/var/lib/kubelet/pods/5957e518-15e6-4acf-9e45-4985b7713fc8/volumes"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.980398 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86909e43-e62d-4532-8232-aa3ca0de5d28" path="/var/lib/kubelet/pods/86909e43-e62d-4532-8232-aa3ca0de5d28/volumes"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.981629 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c91f7f-5413-4050-bfac-93d5daa7e99f" path="/var/lib/kubelet/pods/e6c91f7f-5413-4050-bfac-93d5daa7e99f/volumes"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.982326 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3b2430-d128-4d2d-9518-6be0ca0ddc6f" path="/var/lib/kubelet/pods/fb3b2430-d128-4d2d-9518-6be0ca0ddc6f/volumes"
Dec 12 16:17:48 crc kubenswrapper[5130]: I1212 16:17:48.983728 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"]
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.051044 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce08791-98bd-44a9-8d91-e27aefc67c18-serving-cert\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.051113 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-config\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.051171 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ce08791-98bd-44a9-8d91-e27aefc67c18-tmp\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.051425 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsxkr\" (UniqueName: \"kubernetes.io/projected/0ce08791-98bd-44a9-8d91-e27aefc67c18-kube-api-access-rsxkr\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.051545 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-client-ca\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.051599 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-proxy-ca-bundles\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.083845 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" event={"ID":"1f01c145-aa38-45ce-bd88-2ec20e5b6b01","Type":"ContainerStarted","Data":"609e288f6150383689e6e4701e91eb57b7f4ba8565dc180e325d258aabd97881"}
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.087942 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg" event={"ID":"94e12db4-0aff-472b-9bb0-82451f7e2e17","Type":"ContainerDied","Data":"a91e75a5dac6930aac28aa81157a93d650d81215f1bbe01d548fac770f1d603f"}
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.088032 5130 scope.go:117] "RemoveContainer" containerID="09f6d61d6c86a25345a80608865a5ab3f3bc90d93937cb4854fd17383bcdf547"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.088269 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f958c846-qd8rg"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.112927 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f958c846-qd8rg"]
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.116747 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69f958c846-qd8rg"]
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.153237 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsxkr\" (UniqueName: \"kubernetes.io/projected/0ce08791-98bd-44a9-8d91-e27aefc67c18-kube-api-access-rsxkr\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.153324 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-client-ca\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.153378 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-proxy-ca-bundles\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.153416 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce08791-98bd-44a9-8d91-e27aefc67c18-serving-cert\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.153465 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-config\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.153505 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ce08791-98bd-44a9-8d91-e27aefc67c18-tmp\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.154276 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ce08791-98bd-44a9-8d91-e27aefc67c18-tmp\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.155690 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-client-ca\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.158274 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-config\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.158381 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-proxy-ca-bundles\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.165453 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce08791-98bd-44a9-8d91-e27aefc67c18-serving-cert\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.180173 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsxkr\" (UniqueName: \"kubernetes.io/projected/0ce08791-98bd-44a9-8d91-e27aefc67c18-kube-api-access-rsxkr\") pod \"controller-manager-6445bd5bb7-qhd4b\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.260002 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:49 crc kubenswrapper[5130]: I1212 16:17:49.500870 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"]
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.097542 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" event={"ID":"1f01c145-aa38-45ce-bd88-2ec20e5b6b01","Type":"ContainerStarted","Data":"52f9ea62d5901f63c8acd887ee6ead2524c70dcf113a5059f17f8954638ae9ee"}
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.097883 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.099021 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" event={"ID":"0ce08791-98bd-44a9-8d91-e27aefc67c18","Type":"ContainerStarted","Data":"1332c262f4c6fcfb2e0d40005d777264a84fe92d08a2834dfc7f42a405575944"}
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.099048 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" event={"ID":"0ce08791-98bd-44a9-8d91-e27aefc67c18","Type":"ContainerStarted","Data":"5f7de056136ddaf6c387370bd5cf72cf4ec9d929b91af02b0dcd7c0aceeb020b"}
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.099460 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.103773 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.106059 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.117607 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" podStartSLOduration=4.117583049 podStartE2EDuration="4.117583049s" podCreationTimestamp="2025-12-12 16:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:50.114034351 +0000 UTC m=+170.011709183" watchObservedRunningTime="2025-12-12 16:17:50.117583049 +0000 UTC m=+170.015257881"
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.135469 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" podStartSLOduration=4.13544531 podStartE2EDuration="4.13544531s" podCreationTimestamp="2025-12-12 16:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:17:50.13057985 +0000 UTC m=+170.028254702" watchObservedRunningTime="2025-12-12 16:17:50.13544531 +0000 UTC m=+170.033120142"
Dec 12 16:17:50 crc kubenswrapper[5130]: I1212 16:17:50.379827 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e12db4-0aff-472b-9bb0-82451f7e2e17" path="/var/lib/kubelet/pods/94e12db4-0aff-472b-9bb0-82451f7e2e17/volumes"
Dec 12 16:18:00 crc kubenswrapper[5130]: I1212 16:18:00.895795 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-brfdj"]
Dec 12 16:18:04 crc kubenswrapper[5130]: I1212 16:18:04.567675 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38522: no serving certificate available for the kubelet"
Dec 12 16:18:06 crc kubenswrapper[5130]: I1212 16:18:06.463371 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"]
Dec 12 16:18:06 crc kubenswrapper[5130]: I1212 16:18:06.463697 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" podUID="0ce08791-98bd-44a9-8d91-e27aefc67c18" containerName="controller-manager" containerID="cri-o://1332c262f4c6fcfb2e0d40005d777264a84fe92d08a2834dfc7f42a405575944" gracePeriod=30
Dec 12 16:18:06 crc kubenswrapper[5130]: I1212 16:18:06.477841 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"]
Dec 12 16:18:06 crc kubenswrapper[5130]: I1212 16:18:06.478198 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" podUID="1f01c145-aa38-45ce-bd88-2ec20e5b6b01" containerName="route-controller-manager" containerID="cri-o://52f9ea62d5901f63c8acd887ee6ead2524c70dcf113a5059f17f8954638ae9ee" gracePeriod=30
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.207941 5130 generic.go:358] "Generic (PLEG): container finished" podID="1f01c145-aa38-45ce-bd88-2ec20e5b6b01" containerID="52f9ea62d5901f63c8acd887ee6ead2524c70dcf113a5059f17f8954638ae9ee" exitCode=0
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.208032 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" event={"ID":"1f01c145-aa38-45ce-bd88-2ec20e5b6b01","Type":"ContainerDied","Data":"52f9ea62d5901f63c8acd887ee6ead2524c70dcf113a5059f17f8954638ae9ee"}
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.211058 5130 generic.go:358] "Generic (PLEG): container finished" podID="0ce08791-98bd-44a9-8d91-e27aefc67c18" containerID="1332c262f4c6fcfb2e0d40005d777264a84fe92d08a2834dfc7f42a405575944" exitCode=0
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.211112 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" event={"ID":"0ce08791-98bd-44a9-8d91-e27aefc67c18","Type":"ContainerDied","Data":"1332c262f4c6fcfb2e0d40005d777264a84fe92d08a2834dfc7f42a405575944"}
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.447636 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.485890 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"]
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.486892 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f01c145-aa38-45ce-bd88-2ec20e5b6b01" containerName="route-controller-manager"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.486909 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f01c145-aa38-45ce-bd88-2ec20e5b6b01" containerName="route-controller-manager"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.487023 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f01c145-aa38-45ce-bd88-2ec20e5b6b01" containerName="route-controller-manager"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.491269 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.501059 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"]
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.552980 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-serving-cert\") pod \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") "
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.553089 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rlv4\" (UniqueName: \"kubernetes.io/projected/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-kube-api-access-5rlv4\") pod \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") "
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.553154 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-config\") pod \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") "
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.553355 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-client-ca\") pod \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") "
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.554495 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-config" (OuterVolumeSpecName: "config") pod "1f01c145-aa38-45ce-bd88-2ec20e5b6b01" (UID: "1f01c145-aa38-45ce-bd88-2ec20e5b6b01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.554690 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f01c145-aa38-45ce-bd88-2ec20e5b6b01" (UID: "1f01c145-aa38-45ce-bd88-2ec20e5b6b01"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.554788 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-tmp\") pod \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\" (UID: \"1f01c145-aa38-45ce-bd88-2ec20e5b6b01\") "
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.555371 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-tmp" (OuterVolumeSpecName: "tmp") pod "1f01c145-aa38-45ce-bd88-2ec20e5b6b01" (UID: "1f01c145-aa38-45ce-bd88-2ec20e5b6b01"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.558123 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.558154 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.558164 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.562263 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-kube-api-access-5rlv4" (OuterVolumeSpecName: "kube-api-access-5rlv4") pod "1f01c145-aa38-45ce-bd88-2ec20e5b6b01" (UID: "1f01c145-aa38-45ce-bd88-2ec20e5b6b01"). InnerVolumeSpecName "kube-api-access-5rlv4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.563362 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f01c145-aa38-45ce-bd88-2ec20e5b6b01" (UID: "1f01c145-aa38-45ce-bd88-2ec20e5b6b01"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.660341 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-client-ca\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.660413 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a3af7089-05b2-4dcb-947b-3dd784d92815-tmp\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.660463 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj7np\" (UniqueName: \"kubernetes.io/projected/a3af7089-05b2-4dcb-947b-3dd784d92815-kube-api-access-nj7np\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.660565 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-config\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"
Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.660666 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3af7089-05b2-4dcb-947b-3dd784d92815-serving-cert\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.660942 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rlv4\" (UniqueName: \"kubernetes.io/projected/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-kube-api-access-5rlv4\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.660963 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f01c145-aa38-45ce-bd88-2ec20e5b6b01-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.722976 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.754012 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fffb5779-6br5z"] Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.754727 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ce08791-98bd-44a9-8d91-e27aefc67c18" containerName="controller-manager" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.754750 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce08791-98bd-44a9-8d91-e27aefc67c18" containerName="controller-manager" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.754890 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ce08791-98bd-44a9-8d91-e27aefc67c18" containerName="controller-manager" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.761554 5130 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rsxkr\" (UniqueName: \"kubernetes.io/projected/0ce08791-98bd-44a9-8d91-e27aefc67c18-kube-api-access-rsxkr\") pod \"0ce08791-98bd-44a9-8d91-e27aefc67c18\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.761692 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-config\") pod \"0ce08791-98bd-44a9-8d91-e27aefc67c18\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762015 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-client-ca\") pod \"0ce08791-98bd-44a9-8d91-e27aefc67c18\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762118 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce08791-98bd-44a9-8d91-e27aefc67c18-serving-cert\") pod \"0ce08791-98bd-44a9-8d91-e27aefc67c18\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762143 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ce08791-98bd-44a9-8d91-e27aefc67c18-tmp\") pod \"0ce08791-98bd-44a9-8d91-e27aefc67c18\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762168 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-proxy-ca-bundles\") pod \"0ce08791-98bd-44a9-8d91-e27aefc67c18\" (UID: \"0ce08791-98bd-44a9-8d91-e27aefc67c18\") " Dec 12 
16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762341 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-client-ca\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762380 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a3af7089-05b2-4dcb-947b-3dd784d92815-tmp\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762600 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nj7np\" (UniqueName: \"kubernetes.io/projected/a3af7089-05b2-4dcb-947b-3dd784d92815-kube-api-access-nj7np\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.762671 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ce08791-98bd-44a9-8d91-e27aefc67c18-tmp" (OuterVolumeSpecName: "tmp") pod "0ce08791-98bd-44a9-8d91-e27aefc67c18" (UID: "0ce08791-98bd-44a9-8d91-e27aefc67c18"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.763025 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-config\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.763052 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a3af7089-05b2-4dcb-947b-3dd784d92815-tmp\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.763358 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3af7089-05b2-4dcb-947b-3dd784d92815-serving-cert\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.763551 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0ce08791-98bd-44a9-8d91-e27aefc67c18-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.763548 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0ce08791-98bd-44a9-8d91-e27aefc67c18" (UID: "0ce08791-98bd-44a9-8d91-e27aefc67c18"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.764115 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-config" (OuterVolumeSpecName: "config") pod "0ce08791-98bd-44a9-8d91-e27aefc67c18" (UID: "0ce08791-98bd-44a9-8d91-e27aefc67c18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.764312 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-config\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.764473 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-client-ca\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.764941 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ce08791-98bd-44a9-8d91-e27aefc67c18" (UID: "0ce08791-98bd-44a9-8d91-e27aefc67c18"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.766959 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce08791-98bd-44a9-8d91-e27aefc67c18-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ce08791-98bd-44a9-8d91-e27aefc67c18" (UID: "0ce08791-98bd-44a9-8d91-e27aefc67c18"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.766958 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce08791-98bd-44a9-8d91-e27aefc67c18-kube-api-access-rsxkr" (OuterVolumeSpecName: "kube-api-access-rsxkr") pod "0ce08791-98bd-44a9-8d91-e27aefc67c18" (UID: "0ce08791-98bd-44a9-8d91-e27aefc67c18"). InnerVolumeSpecName "kube-api-access-rsxkr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.776892 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3af7089-05b2-4dcb-947b-3dd784d92815-serving-cert\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.785787 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.786338 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fffb5779-6br5z"] Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.800091 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj7np\" (UniqueName: \"kubernetes.io/projected/a3af7089-05b2-4dcb-947b-3dd784d92815-kube-api-access-nj7np\") pod \"route-controller-manager-67bd47cff9-br6nz\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.807519 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.864692 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-config\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.865379 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2944f3c-2b29-4f86-8a67-59975d09aa88-serving-cert\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.865416 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-client-ca\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.865445 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-proxy-ca-bundles\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.865472 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b2944f3c-2b29-4f86-8a67-59975d09aa88-tmp\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.865501 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf8bh\" (UniqueName: \"kubernetes.io/projected/b2944f3c-2b29-4f86-8a67-59975d09aa88-kube-api-access-zf8bh\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.866139 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.866353 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.866373 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce08791-98bd-44a9-8d91-e27aefc67c18-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.866386 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ce08791-98bd-44a9-8d91-e27aefc67c18-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.866409 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsxkr\" (UniqueName: \"kubernetes.io/projected/0ce08791-98bd-44a9-8d91-e27aefc67c18-kube-api-access-rsxkr\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.968238 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2944f3c-2b29-4f86-8a67-59975d09aa88-serving-cert\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.968295 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-client-ca\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.968320 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-proxy-ca-bundles\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.968547 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b2944f3c-2b29-4f86-8a67-59975d09aa88-tmp\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.968663 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf8bh\" (UniqueName: \"kubernetes.io/projected/b2944f3c-2b29-4f86-8a67-59975d09aa88-kube-api-access-zf8bh\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.968991 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-config\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.969233 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b2944f3c-2b29-4f86-8a67-59975d09aa88-tmp\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.969703 5130 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-proxy-ca-bundles\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.970406 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-client-ca\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.970558 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-config\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.982921 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2944f3c-2b29-4f86-8a67-59975d09aa88-serving-cert\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:07 crc kubenswrapper[5130]: I1212 16:18:07.987101 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf8bh\" (UniqueName: \"kubernetes.io/projected/b2944f3c-2b29-4f86-8a67-59975d09aa88-kube-api-access-zf8bh\") pod \"controller-manager-7fffb5779-6br5z\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.129473 5130 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.210606 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"] Dec 12 16:18:08 crc kubenswrapper[5130]: W1212 16:18:08.228116 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3af7089_05b2_4dcb_947b_3dd784d92815.slice/crio-2d3dc2744fa5b4ed8734404b5f41bcb8d9a837bba2ffffb3ba8c6a4da8a52f1e WatchSource:0}: Error finding container 2d3dc2744fa5b4ed8734404b5f41bcb8d9a837bba2ffffb3ba8c6a4da8a52f1e: Status 404 returned error can't find the container with id 2d3dc2744fa5b4ed8734404b5f41bcb8d9a837bba2ffffb3ba8c6a4da8a52f1e Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.231103 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" event={"ID":"0ce08791-98bd-44a9-8d91-e27aefc67c18","Type":"ContainerDied","Data":"5f7de056136ddaf6c387370bd5cf72cf4ec9d929b91af02b0dcd7c0aceeb020b"} Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.231118 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.231192 5130 scope.go:117] "RemoveContainer" containerID="1332c262f4c6fcfb2e0d40005d777264a84fe92d08a2834dfc7f42a405575944" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.234948 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" event={"ID":"1f01c145-aa38-45ce-bd88-2ec20e5b6b01","Type":"ContainerDied","Data":"609e288f6150383689e6e4701e91eb57b7f4ba8565dc180e325d258aabd97881"} Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.235096 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.251853 5130 scope.go:117] "RemoveContainer" containerID="52f9ea62d5901f63c8acd887ee6ead2524c70dcf113a5059f17f8954638ae9ee" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.278002 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"] Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.280772 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6445bd5bb7-qhd4b"] Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.290688 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"] Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.294043 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b47f77689-5r77s"] Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.378511 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce08791-98bd-44a9-8d91-e27aefc67c18" 
path="/var/lib/kubelet/pods/0ce08791-98bd-44a9-8d91-e27aefc67c18/volumes" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.379972 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f01c145-aa38-45ce-bd88-2ec20e5b6b01" path="/var/lib/kubelet/pods/1f01c145-aa38-45ce-bd88-2ec20e5b6b01/volumes" Dec 12 16:18:08 crc kubenswrapper[5130]: I1212 16:18:08.539699 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fffb5779-6br5z"] Dec 12 16:18:08 crc kubenswrapper[5130]: W1212 16:18:08.547738 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2944f3c_2b29_4f86_8a67_59975d09aa88.slice/crio-0f2c183e8f515b2f190e8930f24422ca27de87c62982b51512617516d3516532 WatchSource:0}: Error finding container 0f2c183e8f515b2f190e8930f24422ca27de87c62982b51512617516d3516532: Status 404 returned error can't find the container with id 0f2c183e8f515b2f190e8930f24422ca27de87c62982b51512617516d3516532 Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.242545 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" event={"ID":"b2944f3c-2b29-4f86-8a67-59975d09aa88","Type":"ContainerStarted","Data":"77fa94161c98b2b46b52329d1614a29da8d3a632559d23d1ee3160ddf4efb64d"} Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.242591 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" event={"ID":"b2944f3c-2b29-4f86-8a67-59975d09aa88","Type":"ContainerStarted","Data":"0f2c183e8f515b2f190e8930f24422ca27de87c62982b51512617516d3516532"} Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.242848 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 
16:18:09.244421 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" event={"ID":"a3af7089-05b2-4dcb-947b-3dd784d92815","Type":"ContainerStarted","Data":"56879013dbea75eed3d81b6a2b798969c454d33b231e29382b429fb91de7bab6"}
Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.244455 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" event={"ID":"a3af7089-05b2-4dcb-947b-3dd784d92815","Type":"ContainerStarted","Data":"2d3dc2744fa5b4ed8734404b5f41bcb8d9a837bba2ffffb3ba8c6a4da8a52f1e"}
Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.244842 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"
Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.261530 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" podStartSLOduration=3.261511216 podStartE2EDuration="3.261511216s" podCreationTimestamp="2025-12-12 16:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:18:09.261301441 +0000 UTC m=+189.158976283" watchObservedRunningTime="2025-12-12 16:18:09.261511216 +0000 UTC m=+189.159186048"
Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.280978 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" podStartSLOduration=3.280910296 podStartE2EDuration="3.280910296s" podCreationTimestamp="2025-12-12 16:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:18:09.280777322 +0000 UTC m=+189.178452174" watchObservedRunningTime="2025-12-12 16:18:09.280910296 +0000 UTC m=+189.178585168"
Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.342057 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"
Dec 12 16:18:09 crc kubenswrapper[5130]: I1212 16:18:09.574244 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.820723 5130 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.821510 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://f1a01912ddee091b284981f73500faf3fcfd7a1071596baf5cd12e42fadf2802" gracePeriod=15
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.821574 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fb358025eb77871c75cb9b40f8c7bc36aebb9927910b33781e814fb8ac191a85" gracePeriod=15
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.821583 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3f84b80c2f32e68a8eb79916fece466ce160a92d4d9b989d1bfd37673b951c48" gracePeriod=15
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.821719 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://96c12daa01120f19be833f82d5f8c18b27d7dc4c74ac5543dd248efa1a9301d1" gracePeriod=15
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.821778 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://818cbab9fa2109ab2203469a2d7999f6b39f7f70722424aa9e78038d779eb741" gracePeriod=15
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.823307 5130 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824015 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824032 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824043 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824052 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824063 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824069 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824082 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824088 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824101 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824108 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824121 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824128 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824141 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824147 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824155 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824161 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824194 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824202 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824212 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824220 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824330 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824340 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824348 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824356 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824364 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824373 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824385 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824604 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.824614 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.917423 5130 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.938869 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.961825 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.961891 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.961986 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.962011 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.962054 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:13 crc kubenswrapper[5130]: I1212 16:18:13.989743 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:13 crc kubenswrapper[5130]: E1212 16:18:13.991335 5130 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063455 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063568 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063675 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063681 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063695 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063806 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063828 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.063992 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.064028 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.064068 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.064238 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.064342 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.064405 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.065152 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.065454 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165213 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165292 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165472 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165568 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165598 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165626 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165662 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165671 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165696 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.165764 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.276621 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.278492 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.279703 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fb358025eb77871c75cb9b40f8c7bc36aebb9927910b33781e814fb8ac191a85" exitCode=0
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.279749 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="96c12daa01120f19be833f82d5f8c18b27d7dc4c74ac5543dd248efa1a9301d1" exitCode=0
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.279757 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3f84b80c2f32e68a8eb79916fece466ce160a92d4d9b989d1bfd37673b951c48" exitCode=0
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.279766 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="818cbab9fa2109ab2203469a2d7999f6b39f7f70722424aa9e78038d779eb741" exitCode=2
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.279833 5130 scope.go:117] "RemoveContainer" containerID="ad11549986f023f63b3e65c6e3b693d4238cce60749fd223f369f42b94870dca"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.282321 5130 generic.go:358] "Generic (PLEG): container finished" podID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" containerID="7eea8ddfdf2799e96a4d403b19f067a1e7d06758be2fb080a0c405d345d4b8b4" exitCode=0
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.282405 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"214aeed8-f6a2-4251-b4d0-c81fd217c7c2","Type":"ContainerDied","Data":"7eea8ddfdf2799e96a4d403b19f067a1e7d06758be2fb080a0c405d345d4b8b4"}
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.292969 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.348960 5130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188084186b8fb32c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:18:14.347895596 +0000 UTC m=+194.245570428,LastTimestamp:2025-12-12 16:18:14.347895596 +0000 UTC m=+194.245570428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.579135 5130 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.579940 5130 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.580317 5130 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.580818 5130 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.581472 5130 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:14 crc kubenswrapper[5130]: I1212 16:18:14.581514 5130 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.581797 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms"
Dec 12 16:18:14 crc kubenswrapper[5130]: E1212 16:18:14.782709 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms"
Dec 12 16:18:15 crc kubenswrapper[5130]: E1212 16:18:15.183554 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms"
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.294236 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157"}
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.294349 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"a2c82fdf2462bfbd8ecd3a16f36881930d03151045c8baa16884ca4e3c315e21"}
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.294797 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:15 crc kubenswrapper[5130]: E1212 16:18:15.295501 5130 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.298326 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.557928 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.695915 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kube-api-access\") pod \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") "
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.695984 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kubelet-dir\") pod \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") "
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.696150 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "214aeed8-f6a2-4251-b4d0-c81fd217c7c2" (UID: "214aeed8-f6a2-4251-b4d0-c81fd217c7c2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.696761 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-var-lock\") pod \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\" (UID: \"214aeed8-f6a2-4251-b4d0-c81fd217c7c2\") "
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.697323 5130 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.697360 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-var-lock" (OuterVolumeSpecName: "var-lock") pod "214aeed8-f6a2-4251-b4d0-c81fd217c7c2" (UID: "214aeed8-f6a2-4251-b4d0-c81fd217c7c2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.702992 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "214aeed8-f6a2-4251-b4d0-c81fd217c7c2" (UID: "214aeed8-f6a2-4251-b4d0-c81fd217c7c2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.798752 5130 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:15 crc kubenswrapper[5130]: I1212 16:18:15.798789 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/214aeed8-f6a2-4251-b4d0-c81fd217c7c2-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:15 crc kubenswrapper[5130]: E1212 16:18:15.985790 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s"
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.313354 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.314676 5130 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f1a01912ddee091b284981f73500faf3fcfd7a1071596baf5cd12e42fadf2802" exitCode=0
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.314810 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f8b3d9b37f8e15c6b95bd9ac6402ce9fc5bdd3698114a49ae52ab1391ea885"
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.317310 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"214aeed8-f6a2-4251-b4d0-c81fd217c7c2","Type":"ContainerDied","Data":"e9a0bf2b155dc14ff07a59baf202683f9cd8e1f0c8d1a97324c66ce16b92ed3d"}
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.317360 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a0bf2b155dc14ff07a59baf202683f9cd8e1f0c8d1a97324c66ce16b92ed3d"
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.317485 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.319767 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.320527 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407243 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407371 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407389 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407522 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407669 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407704 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407867 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.407973 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.408460 5130 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.408487 5130 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.408504 5130 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.408531 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.412107 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.510009 5130 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:16 crc kubenswrapper[5130]: I1212 16:18:16.510077 5130 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 12 16:18:17 crc kubenswrapper[5130]: I1212 16:18:17.324598 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:17 crc kubenswrapper[5130]: E1212 16:18:17.587107 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Dec 12 16:18:17 crc kubenswrapper[5130]: E1212 16:18:17.708446 5130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188084186b8fb32c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:18:14.347895596 +0000 UTC m=+194.245570428,LastTimestamp:2025-12-12 16:18:14.347895596 +0000 UTC m=+194.245570428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 12 16:18:18 crc kubenswrapper[5130]: I1212 16:18:18.377809 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 12 16:18:18 crc kubenswrapper[5130]: I1212 16:18:18.944295 5130 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:18 crc kubenswrapper[5130]: I1212 16:18:18.945098 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:20 crc kubenswrapper[5130]: I1212 16:18:20.376443 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:20 crc kubenswrapper[5130]: E1212 16:18:20.395671 5130 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" volumeName="registry-storage" Dec 12 16:18:20 crc kubenswrapper[5130]: E1212 16:18:20.788023 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="6.4s" Dec 12 16:18:22 crc kubenswrapper[5130]: I1212 16:18:22.730166 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:18:22 crc kubenswrapper[5130]: I1212 16:18:22.730675 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.195860 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" containerName="oauth-openshift" containerID="cri-o://fd9d1e6fffa4e7035ed54facdeb72536d22a2dfeeb29ad14637caee2b9df5255" gracePeriod=15 Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.378188 5130 generic.go:358] "Generic (PLEG): container finished" podID="e13eeec0-72dd-418b-9180-87ca0d56870d" containerID="fd9d1e6fffa4e7035ed54facdeb72536d22a2dfeeb29ad14637caee2b9df5255" exitCode=0 
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.378264 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" event={"ID":"e13eeec0-72dd-418b-9180-87ca0d56870d","Type":"ContainerDied","Data":"fd9d1e6fffa4e7035ed54facdeb72536d22a2dfeeb29ad14637caee2b9df5255"}
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.831797 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.832847 5130 status_manager.go:895] "Failed to get status for pod" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-brfdj\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.833409 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867039 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz8kx\" (UniqueName: \"kubernetes.io/projected/e13eeec0-72dd-418b-9180-87ca0d56870d-kube-api-access-qz8kx\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867328 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-provider-selection\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867469 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-dir\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867578 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-service-ca\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867668 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-error\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867742 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-login\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867819 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-cliconfig\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.867944 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-session\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868095 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-router-certs\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868225 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-serving-cert\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868365 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-ocp-branding-template\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868506 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-trusted-ca-bundle\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868615 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-idp-0-file-data\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868697 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-policies\") pod \"e13eeec0-72dd-418b-9180-87ca0d56870d\" (UID: \"e13eeec0-72dd-418b-9180-87ca0d56870d\") "
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868775 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.868837 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.869376 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.869574 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.870309 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.874376 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.874728 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.874890 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13eeec0-72dd-418b-9180-87ca0d56870d-kube-api-access-qz8kx" (OuterVolumeSpecName: "kube-api-access-qz8kx") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "kube-api-access-qz8kx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.874894 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.875288 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.875519 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.875682 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.875890 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.876338 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e13eeec0-72dd-418b-9180-87ca0d56870d" (UID: "e13eeec0-72dd-418b-9180-87ca0d56870d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970159 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970212 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970222 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970231 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970239 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970250 5130 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970261 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qz8kx\" (UniqueName: \"kubernetes.io/projected/e13eeec0-72dd-418b-9180-87ca0d56870d-kube-api-access-qz8kx\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970271 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970281 5130 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e13eeec0-72dd-418b-9180-87ca0d56870d-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970290 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970300 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970309 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970318 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:26 crc kubenswrapper[5130]: I1212 16:18:26.970326 5130 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e13eeec0-72dd-418b-9180-87ca0d56870d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 12 16:18:27 crc kubenswrapper[5130]: E1212 16:18:27.189813 5130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="7s"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.369387 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.370975 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.373468 5130 status_manager.go:895] "Failed to get status for pod" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-brfdj\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.385392 5130 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.385433 5130 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440"
Dec 12 16:18:27 crc kubenswrapper[5130]: E1212 16:18:27.385979 5130 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.387027 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.387948 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" event={"ID":"e13eeec0-72dd-418b-9180-87ca0d56870d","Type":"ContainerDied","Data":"63d4f7893d2a6e51680e692730931a8e2db49032b3b5feb5b320f7d42af3e4ba"}
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.388031 5130 scope.go:117] "RemoveContainer" containerID="fd9d1e6fffa4e7035ed54facdeb72536d22a2dfeeb29ad14637caee2b9df5255"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.388291 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.390024 5130 status_manager.go:895] "Failed to get status for pod" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-brfdj\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.390596 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.392025 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.392063 5130 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="d06abcd904ffab6d0e7ef275a88cc4d48ca01cbaf45c12b67e4ce3961c69e34f" exitCode=1
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.392291 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"d06abcd904ffab6d0e7ef275a88cc4d48ca01cbaf45c12b67e4ce3961c69e34f"}
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.392727 5130 scope.go:117] "RemoveContainer" containerID="d06abcd904ffab6d0e7ef275a88cc4d48ca01cbaf45c12b67e4ce3961c69e34f"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.393661 5130 status_manager.go:895] "Failed to get status for pod" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-brfdj\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.393885 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.394097 5130 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.471132 5130 status_manager.go:895] "Failed to get status for pod" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-brfdj\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.471738 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: I1212 16:18:27.472273 5130 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused"
Dec 12 16:18:27 crc kubenswrapper[5130]: E1212 16:18:27.709749 5130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188084186b8fb32c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-12 16:18:14.347895596 +0000 UTC m=+194.245570428,LastTimestamp:2025-12-12 16:18:14.347895596 +0000 UTC m=+194.245570428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.400637 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.400765 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"00a484f629ca90f6ab3df3705c452933a22d3045f690c119fc05232caa7eaafd"} Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.402238 5130 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="5759ea98e614dfbacad1abb83520121637fcd085af1d8c72edbeb9cfcb4a2d82" exitCode=0 Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.402277 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"5759ea98e614dfbacad1abb83520121637fcd085af1d8c72edbeb9cfcb4a2d82"} Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.402340 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ba23c72e5083938fc6a0a8c801f95e0392a5743c3ec5c54b0934e0ee4c2b5dc1"} Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.402225 5130 status_manager.go:895] "Failed to get status for pod" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-brfdj\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.402791 5130 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.402818 5130 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.402831 5130 
status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.403220 5130 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:28 crc kubenswrapper[5130]: E1212 16:18:28.403409 5130 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.403537 5130 status_manager.go:895] "Failed to get status for pod" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" pod="openshift-authentication/oauth-openshift-66458b6674-brfdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-brfdj\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.403767 5130 status_manager.go:895] "Failed to get status for pod" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:28 crc kubenswrapper[5130]: I1212 16:18:28.404034 5130 status_manager.go:895] "Failed to get 
status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Dec 12 16:18:29 crc kubenswrapper[5130]: I1212 16:18:29.411001 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3acb395228414c55b65cbbdfa73b680dec4bff5db30e6692fb22b18acd9b3f4a"} Dec 12 16:18:29 crc kubenswrapper[5130]: I1212 16:18:29.411044 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"05d0c55c1d26f187664b2c218d2c1616fccd533610cedc6c451e9324518da75b"} Dec 12 16:18:30 crc kubenswrapper[5130]: I1212 16:18:30.422695 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"734b34ba7c0e56ef2893c4ad4acec2df448815157e9251745124dea4bba0318a"} Dec 12 16:18:30 crc kubenswrapper[5130]: I1212 16:18:30.423008 5130 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:30 crc kubenswrapper[5130]: I1212 16:18:30.423023 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:30 crc kubenswrapper[5130]: I1212 16:18:30.423032 5130 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:30 crc kubenswrapper[5130]: I1212 16:18:30.423037 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5bdd985122ed28132e468bcafcf4fc63ed14dd4d7aca8c345386473cd98161e0"} Dec 12 16:18:30 crc kubenswrapper[5130]: I1212 16:18:30.423065 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"92d85aca035c275b7744334d09251c16553af2c9921acb54cff7a769f33f08d0"} Dec 12 16:18:32 crc kubenswrapper[5130]: I1212 16:18:32.387687 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:32 crc kubenswrapper[5130]: I1212 16:18:32.387929 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:32 crc kubenswrapper[5130]: I1212 16:18:32.394309 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:32 crc kubenswrapper[5130]: I1212 16:18:32.971360 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:18:32 crc kubenswrapper[5130]: I1212 16:18:32.971634 5130 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 16:18:32 crc kubenswrapper[5130]: I1212 16:18:32.971837 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 16:18:33 crc kubenswrapper[5130]: I1212 16:18:33.463020 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 12 16:18:35 crc kubenswrapper[5130]: I1212 16:18:35.436568 5130 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:35 crc kubenswrapper[5130]: I1212 16:18:35.436937 5130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:35 crc kubenswrapper[5130]: I1212 16:18:35.511021 5130 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="4a0e1cf7-532c-4048-93b0-0d3177458ab7" Dec 12 16:18:36 crc kubenswrapper[5130]: I1212 16:18:36.458859 5130 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:36 crc kubenswrapper[5130]: I1212 16:18:36.458891 5130 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:36 crc kubenswrapper[5130]: I1212 16:18:36.466323 5130 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="4a0e1cf7-532c-4048-93b0-0d3177458ab7" Dec 12 16:18:36 crc kubenswrapper[5130]: I1212 16:18:36.467332 5130 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://05d0c55c1d26f187664b2c218d2c1616fccd533610cedc6c451e9324518da75b" Dec 12 16:18:36 crc kubenswrapper[5130]: 
I1212 16:18:36.467362 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:37 crc kubenswrapper[5130]: I1212 16:18:37.464757 5130 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:37 crc kubenswrapper[5130]: I1212 16:18:37.464796 5130 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="603ad635-456e-4bd9-9aba-9f5882cf0440" Dec 12 16:18:37 crc kubenswrapper[5130]: I1212 16:18:37.469248 5130 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="4a0e1cf7-532c-4048-93b0-0d3177458ab7" Dec 12 16:18:42 crc kubenswrapper[5130]: I1212 16:18:42.972269 5130 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 12 16:18:42 crc kubenswrapper[5130]: I1212 16:18:42.972929 5130 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 12 16:18:45 crc kubenswrapper[5130]: I1212 16:18:45.505724 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.100604 5130 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.107063 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.152533 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.174150 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.202910 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.267974 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.633703 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.684523 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.870996 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.873450 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.887437 5130 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.900468 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.921164 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.948694 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:18:46 crc kubenswrapper[5130]: I1212 16:18:46.965203 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.031130 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.331486 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.334518 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.366665 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.422653 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.527238 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.647349 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.860859 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.966611 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 12 16:18:47 crc kubenswrapper[5130]: I1212 16:18:47.990611 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.103424 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.206689 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.307916 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.437309 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.476696 5130 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.497749 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.597913 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.626529 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.689403 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.759269 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.805011 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.805523 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.894357 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 16:18:48.904946 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 12 16:18:48 crc kubenswrapper[5130]: I1212 
16:18:48.979932 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.202911 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.272801 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.293555 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.302737 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.346754 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.420714 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.475004 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.490453 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.791234 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 12 16:18:49 crc 
kubenswrapper[5130]: I1212 16:18:49.832049 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.865943 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.895593 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.919575 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.920225 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 12 16:18:49 crc kubenswrapper[5130]: I1212 16:18:49.971405 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.014347 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.037618 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.081507 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.118960 5130 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.141608 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.204416 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.290266 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.412274 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.455130 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.461401 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.482716 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.489595 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.489714 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.493878 5130 reflector.go:430] "Caches populated" 
type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.500250 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-brfdj"] Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.500353 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.508436 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.521875 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.521860275 podStartE2EDuration="15.521860275s" podCreationTimestamp="2025-12-12 16:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:18:50.521761623 +0000 UTC m=+230.419436455" watchObservedRunningTime="2025-12-12 16:18:50.521860275 +0000 UTC m=+230.419535107" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.558227 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.586900 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.817703 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.866504 5130 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.937531 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 12 16:18:50 crc kubenswrapper[5130]: I1212 16:18:50.982077 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.011693 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.043064 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.188655 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.259096 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.408697 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.464054 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.527860 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.639780 5130 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.666723 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.740434 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.930605 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.939872 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 12 16:18:51 crc kubenswrapper[5130]: I1212 16:18:51.996511 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.008734 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.035768 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr"] Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.036404 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" containerName="oauth-openshift" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.036427 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" containerName="oauth-openshift" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.036450 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" containerName="installer" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.036458 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" containerName="installer" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.036580 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="214aeed8-f6a2-4251-b4d0-c81fd217c7c2" containerName="installer" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.036588 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" containerName="oauth-openshift" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.053538 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.057463 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.057831 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.058051 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.059142 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.059401 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.059675 5130 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.059746 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.059951 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.060419 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.060491 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.060537 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.060629 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.063406 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.069895 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.075093 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 12 16:18:52 crc 
kubenswrapper[5130]: I1212 16:18:52.106233 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115154 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-session\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115265 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-audit-policies\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115291 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsg77\" (UniqueName: \"kubernetes.io/projected/5b0a332f-52bd-409b-b5c0-f2723c617bed-kube-api-access-qsg77\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115311 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-router-certs\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: 
I1212 16:18:52.115331 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115357 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-error\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115451 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115488 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115512 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115532 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115701 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-service-ca\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115777 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-login\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115903 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b0a332f-52bd-409b-b5c0-f2723c617bed-audit-dir\") pod 
\"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.115944 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.182682 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.217692 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.217765 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-service-ca\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.217787 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-login\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218036 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b0a332f-52bd-409b-b5c0-f2723c617bed-audit-dir\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218127 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218205 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-session\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218325 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-audit-policies\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 
16:18:52.218373 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qsg77\" (UniqueName: \"kubernetes.io/projected/5b0a332f-52bd-409b-b5c0-f2723c617bed-kube-api-access-qsg77\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218403 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-router-certs\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218432 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218490 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-error\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218525 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218599 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.218658 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.219494 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.219585 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-service-ca\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " 
pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.219734 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b0a332f-52bd-409b-b5c0-f2723c617bed-audit-dir\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.220497 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-audit-policies\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.220672 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.225594 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-session\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.225690 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.226105 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-router-certs\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.226136 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.227259 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-error\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.228854 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-login\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " 
pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.229243 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.229700 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b0a332f-52bd-409b-b5c0-f2723c617bed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.241586 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsg77\" (UniqueName: \"kubernetes.io/projected/5b0a332f-52bd-409b-b5c0-f2723c617bed-kube-api-access-qsg77\") pod \"oauth-openshift-6567f5ffdb-jrpfr\" (UID: \"5b0a332f-52bd-409b-b5c0-f2723c617bed\") " pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.243908 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.309138 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.340655 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 12 16:18:52 crc 
kubenswrapper[5130]: I1212 16:18:52.346863 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.375049 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.384888 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13eeec0-72dd-418b-9180-87ca0d56870d" path="/var/lib/kubelet/pods/e13eeec0-72dd-418b-9180-87ca0d56870d/volumes" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.473252 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.533432 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.591502 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.607736 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.679505 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.699073 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.710092 5130 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.724434 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.726088 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.727422 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.730045 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.730220 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.743522 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.853254 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.975982 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:18:52 crc kubenswrapper[5130]: I1212 16:18:52.982236 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.036073 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.328201 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.518746 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.567311 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.567311 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.591962 5130 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.684239 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.701106 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.703784 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.714493 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.720561 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.741951 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.772421 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.773248 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.787996 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.819527 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.894064 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.896415 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Dec 12 16:18:53 crc kubenswrapper[5130]: I1212 16:18:53.976077 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.007113 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.042709 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.051563 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.136126 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.238882 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.321120 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.330444 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.344052 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.412775 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.430041 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.457610 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.505860 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.522613 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.629256 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.650526 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.669140 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.670060 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.714766 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.714827 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.727456 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.752323 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.759581 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.812147 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.833983 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.875679 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:54 crc kubenswrapper[5130]: I1212 16:18:54.964880 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.079613 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.255629 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.286318 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.308546 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.463509 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.641384 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.755671 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 12 16:18:55 crc kubenswrapper[5130]: I1212 16:18:55.921477 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.005861 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.072345 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.085317 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.125344 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.128348 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.141548 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.261603 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.307985 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.380166 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.386551 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.486454 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.588381 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.656110 5130 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.755864 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.770769 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.820742 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.850350 5130 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.850627 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157" gracePeriod=5
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.869191 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.879273 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.883899 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.923828 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:56 crc kubenswrapper[5130]: I1212 16:18:56.935842 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.024571 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.107532 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.192043 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.234682 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.238537 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.394772 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.464857 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.551795 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.562529 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.670205 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.863737 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.878223 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.905920 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.939193 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 12 16:18:57 crc kubenswrapper[5130]: I1212 16:18:57.962935 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.002106 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.111630 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.111898 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.155306 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.178053 5130 ???:1] "http: TLS handshake error from 192.168.126.11:54852: no serving certificate available for the kubelet"
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.268415 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.295601 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr"]
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.382980 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.384889 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.423878 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.526839 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.612806 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.628133 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.648770 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.692442 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.737437 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.753482 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.781123 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.789752 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr"]
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.791910 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.980709 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 12 16:18:58 crc kubenswrapper[5130]: I1212 16:18:58.996660 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.027222 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.084882 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.118485 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.143486 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.221788 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.500918 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.509012 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.511020 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.595086 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" event={"ID":"5b0a332f-52bd-409b-b5c0-f2723c617bed","Type":"ContainerStarted","Data":"b66d150953376822bc6c3cab5e65005414ae8a9dc0a2df89d533ee1445e51704"}
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.595146 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" event={"ID":"5b0a332f-52bd-409b-b5c0-f2723c617bed","Type":"ContainerStarted","Data":"8e6e27ebbeb78e69b2b8b28991eb52250199f5ef450238666cf895de621d609a"}
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.595599 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr"
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.618891 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" podStartSLOduration=59.618870046 podStartE2EDuration="59.618870046s" podCreationTimestamp="2025-12-12 16:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:18:59.616516138 +0000 UTC m=+239.514191000" watchObservedRunningTime="2025-12-12 16:18:59.618870046 +0000 UTC m=+239.516544878"
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.678113 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.700628 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.833203 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr"
Dec 12 16:18:59 crc kubenswrapper[5130]: I1212 16:18:59.998567 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:00 crc kubenswrapper[5130]: I1212 16:19:00.047302 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:00 crc kubenswrapper[5130]: I1212 16:19:00.104287 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:00 crc kubenswrapper[5130]: I1212 16:19:00.187166 5130 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:19:00 crc kubenswrapper[5130]: I1212 16:19:00.470379 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 12 16:19:00 crc kubenswrapper[5130]: I1212 16:19:00.624215 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 12 16:19:00 crc kubenswrapper[5130]: I1212 16:19:00.651480 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 12 16:19:00 crc kubenswrapper[5130]: I1212 16:19:00.958690 5130 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:19:01 crc kubenswrapper[5130]: I1212 16:19:01.456255 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 12 16:19:01 crc kubenswrapper[5130]: I1212 16:19:01.484819 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 12 16:19:01 crc kubenswrapper[5130]: I1212 16:19:01.560304 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.453090 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.453615 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.456243 5130 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.458217 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569302 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569373 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569411 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569424 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569487 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569590 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569602 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569662 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.569641 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.570482 5130 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.570512 5130 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.570523 5130 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.570533 5130 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.578115 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.615083 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.615367 5130 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157" exitCode=137
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.615555 5130 scope.go:117] "RemoveContainer" containerID="aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.615773 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.638016 5130 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.641500 5130 scope.go:117] "RemoveContainer" containerID="aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157"
Dec 12 16:19:02 crc kubenswrapper[5130]: E1212 16:19:02.641928 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157\": container with ID starting with aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157 not found: ID does not exist" containerID="aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.642038 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157"} err="failed to get container status \"aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157\": rpc error: code = NotFound desc = could not find container \"aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157\": container with ID starting with aa2c4dbf26adb9aad2b19a085b475603913381cc1f5263507bd75fcf23805157 not found: ID does not exist"
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.671741 5130 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:02 crc kubenswrapper[5130]: I1212 16:19:02.839588 5130 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 12 16:19:04 crc kubenswrapper[5130]: I1212 16:19:04.379930 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.454319 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fffb5779-6br5z"]
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.455268 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" podUID="b2944f3c-2b29-4f86-8a67-59975d09aa88" containerName="controller-manager" containerID="cri-o://77fa94161c98b2b46b52329d1614a29da8d3a632559d23d1ee3160ddf4efb64d" gracePeriod=30
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.460032 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"]
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.460363 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" podUID="a3af7089-05b2-4dcb-947b-3dd784d92815" containerName="route-controller-manager" containerID="cri-o://56879013dbea75eed3d81b6a2b798969c454d33b231e29382b429fb91de7bab6" gracePeriod=30
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.640569 5130 generic.go:358] "Generic (PLEG): container finished" podID="b2944f3c-2b29-4f86-8a67-59975d09aa88" containerID="77fa94161c98b2b46b52329d1614a29da8d3a632559d23d1ee3160ddf4efb64d" exitCode=0
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.640651 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" event={"ID":"b2944f3c-2b29-4f86-8a67-59975d09aa88","Type":"ContainerDied","Data":"77fa94161c98b2b46b52329d1614a29da8d3a632559d23d1ee3160ddf4efb64d"}
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.642194 5130 generic.go:358] "Generic (PLEG): container finished" podID="a3af7089-05b2-4dcb-947b-3dd784d92815" containerID="56879013dbea75eed3d81b6a2b798969c454d33b231e29382b429fb91de7bab6" exitCode=0
Dec 12 16:19:06 crc kubenswrapper[5130]: I1212 16:19:06.642317 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" event={"ID":"a3af7089-05b2-4dcb-947b-3dd784d92815","Type":"ContainerDied","Data":"56879013dbea75eed3d81b6a2b798969c454d33b231e29382b429fb91de7bab6"}
Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.025258 5130 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.104954 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl"] Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.106193 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3af7089-05b2-4dcb-947b-3dd784d92815" containerName="route-controller-manager" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.106308 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3af7089-05b2-4dcb-947b-3dd784d92815" containerName="route-controller-manager" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.106387 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.106444 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.106644 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.108593 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3af7089-05b2-4dcb-947b-3dd784d92815" containerName="route-controller-manager" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.127622 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.134791 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-config\") pod \"a3af7089-05b2-4dcb-947b-3dd784d92815\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.134865 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3af7089-05b2-4dcb-947b-3dd784d92815-serving-cert\") pod \"a3af7089-05b2-4dcb-947b-3dd784d92815\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.134896 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a3af7089-05b2-4dcb-947b-3dd784d92815-tmp\") pod \"a3af7089-05b2-4dcb-947b-3dd784d92815\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.135042 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj7np\" (UniqueName: \"kubernetes.io/projected/a3af7089-05b2-4dcb-947b-3dd784d92815-kube-api-access-nj7np\") pod \"a3af7089-05b2-4dcb-947b-3dd784d92815\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.135081 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-client-ca\") pod \"a3af7089-05b2-4dcb-947b-3dd784d92815\" (UID: \"a3af7089-05b2-4dcb-947b-3dd784d92815\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.136320 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-client-ca" (OuterVolumeSpecName: "client-ca") pod "a3af7089-05b2-4dcb-947b-3dd784d92815" (UID: "a3af7089-05b2-4dcb-947b-3dd784d92815"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.136648 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3af7089-05b2-4dcb-947b-3dd784d92815-tmp" (OuterVolumeSpecName: "tmp") pod "a3af7089-05b2-4dcb-947b-3dd784d92815" (UID: "a3af7089-05b2-4dcb-947b-3dd784d92815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.137559 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-config" (OuterVolumeSpecName: "config") pod "a3af7089-05b2-4dcb-947b-3dd784d92815" (UID: "a3af7089-05b2-4dcb-947b-3dd784d92815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.144651 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3af7089-05b2-4dcb-947b-3dd784d92815-kube-api-access-nj7np" (OuterVolumeSpecName: "kube-api-access-nj7np") pod "a3af7089-05b2-4dcb-947b-3dd784d92815" (UID: "a3af7089-05b2-4dcb-947b-3dd784d92815"). InnerVolumeSpecName "kube-api-access-nj7np". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.146350 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3af7089-05b2-4dcb-947b-3dd784d92815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3af7089-05b2-4dcb-947b-3dd784d92815" (UID: "a3af7089-05b2-4dcb-947b-3dd784d92815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.236073 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2944f3c-2b29-4f86-8a67-59975d09aa88-serving-cert\") pod \"b2944f3c-2b29-4f86-8a67-59975d09aa88\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.236647 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-proxy-ca-bundles\") pod \"b2944f3c-2b29-4f86-8a67-59975d09aa88\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.236745 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf8bh\" (UniqueName: \"kubernetes.io/projected/b2944f3c-2b29-4f86-8a67-59975d09aa88-kube-api-access-zf8bh\") pod \"b2944f3c-2b29-4f86-8a67-59975d09aa88\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.236859 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-config\") pod \"b2944f3c-2b29-4f86-8a67-59975d09aa88\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.236991 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b2944f3c-2b29-4f86-8a67-59975d09aa88-tmp\") pod \"b2944f3c-2b29-4f86-8a67-59975d09aa88\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.237208 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b2944f3c-2b29-4f86-8a67-59975d09aa88-tmp" (OuterVolumeSpecName: "tmp") pod "b2944f3c-2b29-4f86-8a67-59975d09aa88" (UID: "b2944f3c-2b29-4f86-8a67-59975d09aa88"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.237388 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-client-ca\") pod \"b2944f3c-2b29-4f86-8a67-59975d09aa88\" (UID: \"b2944f3c-2b29-4f86-8a67-59975d09aa88\") " Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.237468 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b2944f3c-2b29-4f86-8a67-59975d09aa88" (UID: "b2944f3c-2b29-4f86-8a67-59975d09aa88"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.237614 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-config" (OuterVolumeSpecName: "config") pod "b2944f3c-2b29-4f86-8a67-59975d09aa88" (UID: "b2944f3c-2b29-4f86-8a67-59975d09aa88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.237841 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2944f3c-2b29-4f86-8a67-59975d09aa88" (UID: "b2944f3c-2b29-4f86-8a67-59975d09aa88"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238004 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238081 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238143 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3af7089-05b2-4dcb-947b-3dd784d92815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238218 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a3af7089-05b2-4dcb-947b-3dd784d92815-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238302 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238366 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2944f3c-2b29-4f86-8a67-59975d09aa88-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238430 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b2944f3c-2b29-4f86-8a67-59975d09aa88-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238491 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nj7np\" 
(UniqueName: \"kubernetes.io/projected/a3af7089-05b2-4dcb-947b-3dd784d92815-kube-api-access-nj7np\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.238546 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3af7089-05b2-4dcb-947b-3dd784d92815-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.239549 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2944f3c-2b29-4f86-8a67-59975d09aa88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2944f3c-2b29-4f86-8a67-59975d09aa88" (UID: "b2944f3c-2b29-4f86-8a67-59975d09aa88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.239579 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2944f3c-2b29-4f86-8a67-59975d09aa88-kube-api-access-zf8bh" (OuterVolumeSpecName: "kube-api-access-zf8bh") pod "b2944f3c-2b29-4f86-8a67-59975d09aa88" (UID: "b2944f3c-2b29-4f86-8a67-59975d09aa88"). InnerVolumeSpecName "kube-api-access-zf8bh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.300946 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl"] Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.301008 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-xk96c"] Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.301135 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.302302 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2944f3c-2b29-4f86-8a67-59975d09aa88" containerName="controller-manager" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.302366 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2944f3c-2b29-4f86-8a67-59975d09aa88" containerName="controller-manager" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.302523 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b2944f3c-2b29-4f86-8a67-59975d09aa88" containerName="controller-manager" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.320106 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-xk96c"] Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.320388 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.339684 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2944f3c-2b29-4f86-8a67-59975d09aa88-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.340012 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zf8bh\" (UniqueName: \"kubernetes.io/projected/b2944f3c-2b29-4f86-8a67-59975d09aa88-kube-api-access-zf8bh\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441553 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-proxy-ca-bundles\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441613 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5xt\" (UniqueName: \"kubernetes.io/projected/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-kube-api-access-js5xt\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441642 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-tmp\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc 
kubenswrapper[5130]: I1212 16:19:07.441669 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-config\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441695 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-client-ca\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441723 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg6nf\" (UniqueName: \"kubernetes.io/projected/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-kube-api-access-fg6nf\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441741 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-serving-cert\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441766 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-client-ca\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441792 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-serving-cert\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441831 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-config\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.441887 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-tmp\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.542896 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-js5xt\" (UniqueName: \"kubernetes.io/projected/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-kube-api-access-js5xt\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " 
pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.542956 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-tmp\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.542979 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-config\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.542996 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-client-ca\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.543016 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fg6nf\" (UniqueName: \"kubernetes.io/projected/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-kube-api-access-fg6nf\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.543033 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-serving-cert\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.543053 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-client-ca\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.543076 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-serving-cert\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.543112 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-config\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.543141 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-tmp\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.543223 5130 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-proxy-ca-bundles\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.544246 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-proxy-ca-bundles\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.544803 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-tmp\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.545153 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-client-ca\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.545240 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-client-ca\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " 
pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.545542 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-tmp\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.545575 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-config\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.546215 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-config\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.548450 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-serving-cert\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.548505 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-serving-cert\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: 
\"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.561476 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg6nf\" (UniqueName: \"kubernetes.io/projected/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-kube-api-access-fg6nf\") pod \"controller-manager-7b9f779b68-xk96c\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.561703 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-js5xt\" (UniqueName: \"kubernetes.io/projected/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-kube-api-access-js5xt\") pod \"route-controller-manager-8fdcdbb66-vvkdl\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.620628 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.643328 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.651247 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" event={"ID":"b2944f3c-2b29-4f86-8a67-59975d09aa88","Type":"ContainerDied","Data":"0f2c183e8f515b2f190e8930f24422ca27de87c62982b51512617516d3516532"} Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.651314 5130 scope.go:117] "RemoveContainer" containerID="77fa94161c98b2b46b52329d1614a29da8d3a632559d23d1ee3160ddf4efb64d" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.651460 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fffb5779-6br5z" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.657264 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.657319 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz" event={"ID":"a3af7089-05b2-4dcb-947b-3dd784d92815","Type":"ContainerDied","Data":"2d3dc2744fa5b4ed8734404b5f41bcb8d9a837bba2ffffb3ba8c6a4da8a52f1e"} Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.699558 5130 scope.go:117] "RemoveContainer" containerID="56879013dbea75eed3d81b6a2b798969c454d33b231e29382b429fb91de7bab6" Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.701430 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"] Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.711303 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd47cff9-br6nz"] Dec 12 
16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.716418 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fffb5779-6br5z"] Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.720619 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fffb5779-6br5z"] Dec 12 16:19:07 crc kubenswrapper[5130]: I1212 16:19:07.912908 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-xk96c"] Dec 12 16:19:07 crc kubenswrapper[5130]: W1212 16:19:07.921942 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0bc7bd1_3831_4f42_b4fe_d83030ae43bb.slice/crio-3be55320e701051abeba36d445f32eebf452dd08a6e9fafb9975dd9edab245e4 WatchSource:0}: Error finding container 3be55320e701051abeba36d445f32eebf452dd08a6e9fafb9975dd9edab245e4: Status 404 returned error can't find the container with id 3be55320e701051abeba36d445f32eebf452dd08a6e9fafb9975dd9edab245e4 Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.055155 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl"] Dec 12 16:19:08 crc kubenswrapper[5130]: W1212 16:19:08.063508 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e21d17f_ba99_44c0_9127_7a65e5d9bdca.slice/crio-cea2f31a5364e0661dd115d55f51b171cc11167d004f5da71c03a7ca7d10a457 WatchSource:0}: Error finding container cea2f31a5364e0661dd115d55f51b171cc11167d004f5da71c03a7ca7d10a457: Status 404 returned error can't find the container with id cea2f31a5364e0661dd115d55f51b171cc11167d004f5da71c03a7ca7d10a457 Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.388032 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a3af7089-05b2-4dcb-947b-3dd784d92815" path="/var/lib/kubelet/pods/a3af7089-05b2-4dcb-947b-3dd784d92815/volumes" Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.389108 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2944f3c-2b29-4f86-8a67-59975d09aa88" path="/var/lib/kubelet/pods/b2944f3c-2b29-4f86-8a67-59975d09aa88/volumes" Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.664404 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" event={"ID":"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb","Type":"ContainerStarted","Data":"d38d602658ec9334f9a7ba8bad345171784932eb674addeaf4536f4cb0603b64"} Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.664467 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" event={"ID":"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb","Type":"ContainerStarted","Data":"3be55320e701051abeba36d445f32eebf452dd08a6e9fafb9975dd9edab245e4"} Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.664790 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.668135 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" event={"ID":"7e21d17f-ba99-44c0-9127-7a65e5d9bdca","Type":"ContainerStarted","Data":"c166f142f35d884001a3f5ba324ee7abe9c199e9ef7778fa4c2c6651c2800dba"} Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.668199 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" event={"ID":"7e21d17f-ba99-44c0-9127-7a65e5d9bdca","Type":"ContainerStarted","Data":"cea2f31a5364e0661dd115d55f51b171cc11167d004f5da71c03a7ca7d10a457"} Dec 12 16:19:08 crc 
kubenswrapper[5130]: I1212 16:19:08.668396 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.672969 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.685264 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" podStartSLOduration=2.685248851 podStartE2EDuration="2.685248851s" podCreationTimestamp="2025-12-12 16:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:08.680393781 +0000 UTC m=+248.578068613" watchObservedRunningTime="2025-12-12 16:19:08.685248851 +0000 UTC m=+248.582923683" Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.702775 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" podStartSLOduration=2.702757884 podStartE2EDuration="2.702757884s" podCreationTimestamp="2025-12-12 16:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:08.698703594 +0000 UTC m=+248.596378426" watchObservedRunningTime="2025-12-12 16:19:08.702757884 +0000 UTC m=+248.600432716" Dec 12 16:19:08 crc kubenswrapper[5130]: I1212 16:19:08.863652 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:18 crc kubenswrapper[5130]: I1212 16:19:18.744882 5130 generic.go:358] "Generic (PLEG): container finished" podID="1de41ef3-7896-4e9c-8201-8174bc4468c4" 
containerID="f6800f29ce6dfd01bbd7f9c0b999d8c7c936dd1b2b43419d7987576203561f95" exitCode=0 Dec 12 16:19:18 crc kubenswrapper[5130]: I1212 16:19:18.744997 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" event={"ID":"1de41ef3-7896-4e9c-8201-8174bc4468c4","Type":"ContainerDied","Data":"f6800f29ce6dfd01bbd7f9c0b999d8c7c936dd1b2b43419d7987576203561f95"} Dec 12 16:19:18 crc kubenswrapper[5130]: I1212 16:19:18.746068 5130 scope.go:117] "RemoveContainer" containerID="f6800f29ce6dfd01bbd7f9c0b999d8c7c936dd1b2b43419d7987576203561f95" Dec 12 16:19:19 crc kubenswrapper[5130]: I1212 16:19:19.753899 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" event={"ID":"1de41ef3-7896-4e9c-8201-8174bc4468c4","Type":"ContainerStarted","Data":"65636c30f22479721b311e85c17fa731da3fa4e60c7c20a52840f091af2f46a8"} Dec 12 16:19:19 crc kubenswrapper[5130]: I1212 16:19:19.754715 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:19:19 crc kubenswrapper[5130]: I1212 16:19:19.759978 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:19:22 crc kubenswrapper[5130]: I1212 16:19:22.730215 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:19:22 crc kubenswrapper[5130]: I1212 16:19:22.730894 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:19:22 crc kubenswrapper[5130]: I1212 16:19:22.730997 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:19:22 crc kubenswrapper[5130]: I1212 16:19:22.732395 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"945d8bb14b5e6a98fa9e0d91e099375cda051376ad0d1a72bc65b3cc8a701a5f"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:19:22 crc kubenswrapper[5130]: I1212 16:19:22.732613 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://945d8bb14b5e6a98fa9e0d91e099375cda051376ad0d1a72bc65b3cc8a701a5f" gracePeriod=600 Dec 12 16:19:22 crc kubenswrapper[5130]: E1212 16:19:22.871001 5130 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eed03e3_b46f_4ae0_a063_d9a0d64c3a7e.slice/crio-945d8bb14b5e6a98fa9e0d91e099375cda051376ad0d1a72bc65b3cc8a701a5f.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:19:23 crc kubenswrapper[5130]: I1212 16:19:23.779397 5130 generic.go:358] "Generic (PLEG): container finished" podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="945d8bb14b5e6a98fa9e0d91e099375cda051376ad0d1a72bc65b3cc8a701a5f" exitCode=0 Dec 12 16:19:23 crc kubenswrapper[5130]: I1212 16:19:23.779546 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" 
event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"945d8bb14b5e6a98fa9e0d91e099375cda051376ad0d1a72bc65b3cc8a701a5f"} Dec 12 16:19:23 crc kubenswrapper[5130]: I1212 16:19:23.780224 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"bab2472634bb02da167c93d4ee47778aaec9280425412ea74c819303d8206668"} Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.392879 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-xk96c"] Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.393490 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" podUID="c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" containerName="controller-manager" containerID="cri-o://d38d602658ec9334f9a7ba8bad345171784932eb674addeaf4536f4cb0603b64" gracePeriod=30 Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.417231 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl"] Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.417531 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" podUID="7e21d17f-ba99-44c0-9127-7a65e5d9bdca" containerName="route-controller-manager" containerID="cri-o://c166f142f35d884001a3f5ba324ee7abe9c199e9ef7778fa4c2c6651c2800dba" gracePeriod=30 Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.521282 5130 ???:1] "http: TLS handshake error from 192.168.126.11:59594: no serving certificate available for the kubelet" Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.799528 5130 generic.go:358] "Generic (PLEG): container finished" 
podID="c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" containerID="d38d602658ec9334f9a7ba8bad345171784932eb674addeaf4536f4cb0603b64" exitCode=0 Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.799607 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" event={"ID":"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb","Type":"ContainerDied","Data":"d38d602658ec9334f9a7ba8bad345171784932eb674addeaf4536f4cb0603b64"} Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.800820 5130 generic.go:358] "Generic (PLEG): container finished" podID="7e21d17f-ba99-44c0-9127-7a65e5d9bdca" containerID="c166f142f35d884001a3f5ba324ee7abe9c199e9ef7778fa4c2c6651c2800dba" exitCode=0 Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.800948 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" event={"ID":"7e21d17f-ba99-44c0-9127-7a65e5d9bdca","Type":"ContainerDied","Data":"c166f142f35d884001a3f5ba324ee7abe9c199e9ef7778fa4c2c6651c2800dba"} Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.945164 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.977457 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"] Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.979625 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e21d17f-ba99-44c0-9127-7a65e5d9bdca" containerName="route-controller-manager" Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.979668 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e21d17f-ba99-44c0-9127-7a65e5d9bdca" containerName="route-controller-manager" Dec 12 16:19:26 crc kubenswrapper[5130]: I1212 16:19:26.979839 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e21d17f-ba99-44c0-9127-7a65e5d9bdca" containerName="route-controller-manager" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.020019 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-serving-cert\") pod \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.020120 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-tmp\") pod \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.020271 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js5xt\" (UniqueName: \"kubernetes.io/projected/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-kube-api-access-js5xt\") pod \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " Dec 12 
16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.020347 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-client-ca\") pod \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.020388 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-config\") pod \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\" (UID: \"7e21d17f-ba99-44c0-9127-7a65e5d9bdca\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.020803 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-tmp" (OuterVolumeSpecName: "tmp") pod "7e21d17f-ba99-44c0-9127-7a65e5d9bdca" (UID: "7e21d17f-ba99-44c0-9127-7a65e5d9bdca"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.021424 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-config" (OuterVolumeSpecName: "config") pod "7e21d17f-ba99-44c0-9127-7a65e5d9bdca" (UID: "7e21d17f-ba99-44c0-9127-7a65e5d9bdca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.021915 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e21d17f-ba99-44c0-9127-7a65e5d9bdca" (UID: "7e21d17f-ba99-44c0-9127-7a65e5d9bdca"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.027136 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e21d17f-ba99-44c0-9127-7a65e5d9bdca" (UID: "7e21d17f-ba99-44c0-9127-7a65e5d9bdca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.039404 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-kube-api-access-js5xt" (OuterVolumeSpecName: "kube-api-access-js5xt") pod "7e21d17f-ba99-44c0-9127-7a65e5d9bdca" (UID: "7e21d17f-ba99-44c0-9127-7a65e5d9bdca"). InnerVolumeSpecName "kube-api-access-js5xt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.078659 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"] Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.078844 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.121880 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.121913 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.121922 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-js5xt\" (UniqueName: \"kubernetes.io/projected/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-kube-api-access-js5xt\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.121931 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.121939 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e21d17f-ba99-44c0-9127-7a65e5d9bdca-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.161230 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.184243 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79d797b698-v4v6j"] Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.184783 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" containerName="controller-manager" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.184799 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" containerName="controller-manager" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.184907 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" containerName="controller-manager" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.222786 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg6nf\" (UniqueName: \"kubernetes.io/projected/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-kube-api-access-fg6nf\") pod \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.223116 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-tmp\") pod \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.223298 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-config\") pod \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 
16:19:27.223456 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-tmp" (OuterVolumeSpecName: "tmp") pod "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" (UID: "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.223681 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-client-ca\") pod \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.223787 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-serving-cert\") pod \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.224554 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-proxy-ca-bundles\") pod \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\" (UID: \"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb\") " Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.224849 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-config\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.224940 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-client-ca\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.223952 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-config" (OuterVolumeSpecName: "config") pod "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" (UID: "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.225123 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8n7\" (UniqueName: \"kubernetes.io/projected/1fad0dc5-4596-4305-9545-f2525bf2a5f6-kube-api-access-8q8n7\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.224068 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" (UID: "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.224876 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" (UID: "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.225289 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fad0dc5-4596-4305-9545-f2525bf2a5f6-serving-cert\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.225341 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fad0dc5-4596-4305-9545-f2525bf2a5f6-tmp\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.225548 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.225571 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.225582 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.225591 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-config\") on node \"crc\" 
DevicePath \"\""
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.228226 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-kube-api-access-fg6nf" (OuterVolumeSpecName: "kube-api-access-fg6nf") pod "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" (UID: "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb"). InnerVolumeSpecName "kube-api-access-fg6nf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.228249 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" (UID: "c0bc7bd1-3831-4f42-b4fe-d83030ae43bb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.326381 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8n7\" (UniqueName: \"kubernetes.io/projected/1fad0dc5-4596-4305-9545-f2525bf2a5f6-kube-api-access-8q8n7\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.326432 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fad0dc5-4596-4305-9545-f2525bf2a5f6-serving-cert\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.326457 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fad0dc5-4596-4305-9545-f2525bf2a5f6-tmp\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.326496 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-config\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.326511 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-client-ca\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.326560 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.326570 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fg6nf\" (UniqueName: \"kubernetes.io/projected/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb-kube-api-access-fg6nf\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.327384 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-client-ca\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.327431 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fad0dc5-4596-4305-9545-f2525bf2a5f6-tmp\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.328191 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-config\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.331041 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fad0dc5-4596-4305-9545-f2525bf2a5f6-serving-cert\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.344883 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8n7\" (UniqueName: \"kubernetes.io/projected/1fad0dc5-4596-4305-9545-f2525bf2a5f6-kube-api-access-8q8n7\") pod \"route-controller-manager-bf6bf5794-d5zzt\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.397769 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.537128 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79d797b698-v4v6j"]
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.537291 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.631796 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f239ae24-879a-4441-8fdf-e35f8be83d86-serving-cert\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.632243 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f239ae24-879a-4441-8fdf-e35f8be83d86-tmp\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.632368 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-client-ca\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.632472 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twq54\" (UniqueName: \"kubernetes.io/projected/f239ae24-879a-4441-8fdf-e35f8be83d86-kube-api-access-twq54\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.632509 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-proxy-ca-bundles\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.632544 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-config\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.734438 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f239ae24-879a-4441-8fdf-e35f8be83d86-tmp\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.734571 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-client-ca\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.734626 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-twq54\" (UniqueName: \"kubernetes.io/projected/f239ae24-879a-4441-8fdf-e35f8be83d86-kube-api-access-twq54\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.734664 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-proxy-ca-bundles\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.734767 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-config\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.734819 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f239ae24-879a-4441-8fdf-e35f8be83d86-serving-cert\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.735638 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f239ae24-879a-4441-8fdf-e35f8be83d86-tmp\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.736231 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-proxy-ca-bundles\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.736765 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-config\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.737422 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-client-ca\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.744484 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f239ae24-879a-4441-8fdf-e35f8be83d86-serving-cert\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.752832 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-twq54\" (UniqueName: \"kubernetes.io/projected/f239ae24-879a-4441-8fdf-e35f8be83d86-kube-api-access-twq54\") pod \"controller-manager-79d797b698-v4v6j\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") " pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.807343 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"]
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.810076 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.810128 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9f779b68-xk96c" event={"ID":"c0bc7bd1-3831-4f42-b4fe-d83030ae43bb","Type":"ContainerDied","Data":"3be55320e701051abeba36d445f32eebf452dd08a6e9fafb9975dd9edab245e4"}
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.810204 5130 scope.go:117] "RemoveContainer" containerID="d38d602658ec9334f9a7ba8bad345171784932eb674addeaf4536f4cb0603b64"
Dec 12 16:19:27 crc kubenswrapper[5130]: W1212 16:19:27.811027 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fad0dc5_4596_4305_9545_f2525bf2a5f6.slice/crio-a617fc7065f1a27b47bb99a0229d4625e224ad99323bbfe378c7893aeb2e13f9 WatchSource:0}: Error finding container a617fc7065f1a27b47bb99a0229d4625e224ad99323bbfe378c7893aeb2e13f9: Status 404 returned error can't find the container with id a617fc7065f1a27b47bb99a0229d4625e224ad99323bbfe378c7893aeb2e13f9
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.813234 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl" event={"ID":"7e21d17f-ba99-44c0-9127-7a65e5d9bdca","Type":"ContainerDied","Data":"cea2f31a5364e0661dd115d55f51b171cc11167d004f5da71c03a7ca7d10a457"}
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.813245 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.840746 5130 scope.go:117] "RemoveContainer" containerID="c166f142f35d884001a3f5ba324ee7abe9c199e9ef7778fa4c2c6651c2800dba"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.851037 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl"]
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.852772 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:27 crc kubenswrapper[5130]: I1212 16:19:27.854898 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-vvkdl"]
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.455931 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e21d17f-ba99-44c0-9127-7a65e5d9bdca" path="/var/lib/kubelet/pods/7e21d17f-ba99-44c0-9127-7a65e5d9bdca/volumes"
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.456994 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-xk96c"]
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.457055 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-xk96c"]
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.567399 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79d797b698-v4v6j"]
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.820370 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" event={"ID":"1fad0dc5-4596-4305-9545-f2525bf2a5f6","Type":"ContainerStarted","Data":"8fe3222073eba01c686e68480538e777dc4f9e27f3286132426020a2f9728e94"}
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.820754 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" event={"ID":"1fad0dc5-4596-4305-9545-f2525bf2a5f6","Type":"ContainerStarted","Data":"a617fc7065f1a27b47bb99a0229d4625e224ad99323bbfe378c7893aeb2e13f9"}
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.821060 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.825507 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j" event={"ID":"f239ae24-879a-4441-8fdf-e35f8be83d86","Type":"ContainerStarted","Data":"5d939f69d20ddb862683505a772b19a2ecf0d2d384588657426f95f9db9bdcb6"}
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.825562 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j" event={"ID":"f239ae24-879a-4441-8fdf-e35f8be83d86","Type":"ContainerStarted","Data":"9053e2f67cbba91f751ed0b628915367d15c8f735f68236bc3df7299c22cf5ce"}
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.825830 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.843640 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" podStartSLOduration=2.843614177 podStartE2EDuration="2.843614177s" podCreationTimestamp="2025-12-12 16:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:28.838048457 +0000 UTC m=+268.735723289" watchObservedRunningTime="2025-12-12 16:19:28.843614177 +0000 UTC m=+268.741289029"
Dec 12 16:19:28 crc kubenswrapper[5130]: I1212 16:19:28.854827 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j" podStartSLOduration=2.854804678 podStartE2EDuration="2.854804678s" podCreationTimestamp="2025-12-12 16:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:28.852584782 +0000 UTC m=+268.750259634" watchObservedRunningTime="2025-12-12 16:19:28.854804678 +0000 UTC m=+268.752479510"
Dec 12 16:19:29 crc kubenswrapper[5130]: I1212 16:19:29.384064 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:29 crc kubenswrapper[5130]: I1212 16:19:29.512287 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"
Dec 12 16:19:30 crc kubenswrapper[5130]: I1212 16:19:30.383895 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0bc7bd1-3831-4f42-b4fe-d83030ae43bb" path="/var/lib/kubelet/pods/c0bc7bd1-3831-4f42-b4fe-d83030ae43bb/volumes"
Dec 12 16:19:46 crc kubenswrapper[5130]: I1212 16:19:46.394443 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79d797b698-v4v6j"]
Dec 12 16:19:46 crc kubenswrapper[5130]: I1212 16:19:46.395298 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j" podUID="f239ae24-879a-4441-8fdf-e35f8be83d86" containerName="controller-manager" containerID="cri-o://5d939f69d20ddb862683505a772b19a2ecf0d2d384588657426f95f9db9bdcb6" gracePeriod=30
Dec 12 16:19:46 crc kubenswrapper[5130]: I1212 16:19:46.930029 5130 generic.go:358] "Generic (PLEG): container finished" podID="f239ae24-879a-4441-8fdf-e35f8be83d86" containerID="5d939f69d20ddb862683505a772b19a2ecf0d2d384588657426f95f9db9bdcb6" exitCode=0
Dec 12 16:19:46 crc kubenswrapper[5130]: I1212 16:19:46.930124 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j" event={"ID":"f239ae24-879a-4441-8fdf-e35f8be83d86","Type":"ContainerDied","Data":"5d939f69d20ddb862683505a772b19a2ecf0d2d384588657426f95f9db9bdcb6"}
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.089334 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.113418 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"]
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.114147 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f239ae24-879a-4441-8fdf-e35f8be83d86" containerName="controller-manager"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.114225 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="f239ae24-879a-4441-8fdf-e35f8be83d86" containerName="controller-manager"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.114369 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="f239ae24-879a-4441-8fdf-e35f8be83d86" containerName="controller-manager"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.122581 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.130300 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"]
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208484 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-config\") pod \"f239ae24-879a-4441-8fdf-e35f8be83d86\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") "
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208547 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twq54\" (UniqueName: \"kubernetes.io/projected/f239ae24-879a-4441-8fdf-e35f8be83d86-kube-api-access-twq54\") pod \"f239ae24-879a-4441-8fdf-e35f8be83d86\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") "
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208632 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f239ae24-879a-4441-8fdf-e35f8be83d86-serving-cert\") pod \"f239ae24-879a-4441-8fdf-e35f8be83d86\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") "
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208726 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f239ae24-879a-4441-8fdf-e35f8be83d86-tmp\") pod \"f239ae24-879a-4441-8fdf-e35f8be83d86\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") "
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208748 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-client-ca\") pod \"f239ae24-879a-4441-8fdf-e35f8be83d86\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") "
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208798 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-proxy-ca-bundles\") pod \"f239ae24-879a-4441-8fdf-e35f8be83d86\" (UID: \"f239ae24-879a-4441-8fdf-e35f8be83d86\") "
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208894 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-proxy-ca-bundles\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208929 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-client-ca\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208967 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7313ab95-a89a-4df9-a791-1d048a6beba9-serving-cert\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.208991 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9jvp\" (UniqueName: \"kubernetes.io/projected/7313ab95-a89a-4df9-a791-1d048a6beba9-kube-api-access-f9jvp\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.209016 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-config\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.209048 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7313ab95-a89a-4df9-a791-1d048a6beba9-tmp\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.209504 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-config" (OuterVolumeSpecName: "config") pod "f239ae24-879a-4441-8fdf-e35f8be83d86" (UID: "f239ae24-879a-4441-8fdf-e35f8be83d86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.209743 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f239ae24-879a-4441-8fdf-e35f8be83d86-tmp" (OuterVolumeSpecName: "tmp") pod "f239ae24-879a-4441-8fdf-e35f8be83d86" (UID: "f239ae24-879a-4441-8fdf-e35f8be83d86"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.209938 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-client-ca" (OuterVolumeSpecName: "client-ca") pod "f239ae24-879a-4441-8fdf-e35f8be83d86" (UID: "f239ae24-879a-4441-8fdf-e35f8be83d86"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.210061 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f239ae24-879a-4441-8fdf-e35f8be83d86" (UID: "f239ae24-879a-4441-8fdf-e35f8be83d86"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.215348 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f239ae24-879a-4441-8fdf-e35f8be83d86-kube-api-access-twq54" (OuterVolumeSpecName: "kube-api-access-twq54") pod "f239ae24-879a-4441-8fdf-e35f8be83d86" (UID: "f239ae24-879a-4441-8fdf-e35f8be83d86"). InnerVolumeSpecName "kube-api-access-twq54". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.215439 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f239ae24-879a-4441-8fdf-e35f8be83d86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f239ae24-879a-4441-8fdf-e35f8be83d86" (UID: "f239ae24-879a-4441-8fdf-e35f8be83d86"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.310816 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7313ab95-a89a-4df9-a791-1d048a6beba9-tmp\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.310886 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-proxy-ca-bundles\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.310919 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-client-ca\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.310954 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7313ab95-a89a-4df9-a791-1d048a6beba9-serving-cert\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.310976 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9jvp\" (UniqueName: \"kubernetes.io/projected/7313ab95-a89a-4df9-a791-1d048a6beba9-kube-api-access-f9jvp\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.310999 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-config\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.311039 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f239ae24-879a-4441-8fdf-e35f8be83d86-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.311051 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-client-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.311061 5130 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.311069 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f239ae24-879a-4441-8fdf-e35f8be83d86-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.311078 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twq54\" (UniqueName: \"kubernetes.io/projected/f239ae24-879a-4441-8fdf-e35f8be83d86-kube-api-access-twq54\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.311087 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f239ae24-879a-4441-8fdf-e35f8be83d86-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.311933 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7313ab95-a89a-4df9-a791-1d048a6beba9-tmp\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.312419 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-config\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.313141 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-proxy-ca-bundles\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.313565 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7313ab95-a89a-4df9-a791-1d048a6beba9-client-ca\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.315312 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7313ab95-a89a-4df9-a791-1d048a6beba9-serving-cert\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.334445 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9jvp\" (UniqueName: \"kubernetes.io/projected/7313ab95-a89a-4df9-a791-1d048a6beba9-kube-api-access-f9jvp\") pod \"controller-manager-7b9f779b68-rhrzf\" (UID: \"7313ab95-a89a-4df9-a791-1d048a6beba9\") " pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.436490 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.865643 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b9f779b68-rhrzf"]
Dec 12 16:19:47 crc kubenswrapper[5130]: W1212 16:19:47.880892 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7313ab95_a89a_4df9_a791_1d048a6beba9.slice/crio-2442fefb3ec630c1459249d541dea75bdd7b8cce13dfc98f86cd71a04f5a5896 WatchSource:0}: Error finding container 2442fefb3ec630c1459249d541dea75bdd7b8cce13dfc98f86cd71a04f5a5896: Status 404 returned error can't find the container with id 2442fefb3ec630c1459249d541dea75bdd7b8cce13dfc98f86cd71a04f5a5896
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.941100 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j" event={"ID":"f239ae24-879a-4441-8fdf-e35f8be83d86","Type":"ContainerDied","Data":"9053e2f67cbba91f751ed0b628915367d15c8f735f68236bc3df7299c22cf5ce"}
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.941195 5130 scope.go:117] "RemoveContainer" containerID="5d939f69d20ddb862683505a772b19a2ecf0d2d384588657426f95f9db9bdcb6"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.941384 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d797b698-v4v6j"
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.944155 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf" event={"ID":"7313ab95-a89a-4df9-a791-1d048a6beba9","Type":"ContainerStarted","Data":"2442fefb3ec630c1459249d541dea75bdd7b8cce13dfc98f86cd71a04f5a5896"}
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.987600 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79d797b698-v4v6j"]
Dec 12 16:19:47 crc kubenswrapper[5130]: I1212 16:19:47.991120 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79d797b698-v4v6j"]
Dec 12 16:19:48 crc kubenswrapper[5130]: I1212 16:19:48.378447 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f239ae24-879a-4441-8fdf-e35f8be83d86" path="/var/lib/kubelet/pods/f239ae24-879a-4441-8fdf-e35f8be83d86/volumes"
Dec 12 16:19:48 crc kubenswrapper[5130]: I1212 16:19:48.953841 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf" event={"ID":"7313ab95-a89a-4df9-a791-1d048a6beba9","Type":"ContainerStarted","Data":"f06404e563572eddb1696b4df8f1fcf36f6142402e41d501f00895bd235c987e"}
Dec 12 16:19:48 crc kubenswrapper[5130]: I1212 16:19:48.983451 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf" podStartSLOduration=2.983435275 podStartE2EDuration="2.983435275s" podCreationTimestamp="2025-12-12 16:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:19:48.980364018 +0000 UTC m=+288.878038890" watchObservedRunningTime="2025-12-12 16:19:48.983435275 +0000 UTC m=+288.881110107" Dec 12 16:19:49 crc kubenswrapper[5130]: I1212 16:19:49.958833 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf" Dec 12 16:19:49 crc kubenswrapper[5130]: I1212 16:19:49.964246 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b9f779b68-rhrzf" Dec 12 16:20:00 crc kubenswrapper[5130]: I1212 16:20:00.545844 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:20:19 crc kubenswrapper[5130]: I1212 16:20:19.439331 5130 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.779621 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvzzz"] Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.780844 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pvzzz" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="registry-server" containerID="cri-o://8a404432fdc03966e4b4413b026d4d5da46820bf9ded19a3ceb42d61ab1be328" gracePeriod=30 Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.789938 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-2gt6h"] Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.790532 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2gt6h" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="registry-server" containerID="cri-o://5fb7a27d9d232fecf29af0ea2cf521c7fcffd29cc516ee00c9b3fdc12860c3c9" gracePeriod=30 Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.798634 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"] Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.798972 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator" containerID="cri-o://65636c30f22479721b311e85c17fa731da3fa4e60c7c20a52840f091af2f46a8" gracePeriod=30 Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.805389 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7x92"] Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.805830 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s7x92" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="registry-server" containerID="cri-o://56b6b5fa1fdb979a756c382f6c6262c415947ed7dac44278f932ddd7ef046da8" gracePeriod=30 Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.822372 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ndfc"] Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.822850 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9ndfc" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="registry-server" 
containerID="cri-o://e5ac6bb6b6a834b1d5556d9b1331cd2084885f081082cc31d77c1b8643f8d55b" gracePeriod=30 Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.835395 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-4vhrb"] Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.932741 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-4vhrb"] Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.932985 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.965969 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a0e237f-ebef-42b0-ad96-926e15307914-tmp\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.966220 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cxsc\" (UniqueName: \"kubernetes.io/projected/9a0e237f-ebef-42b0-ad96-926e15307914-kube-api-access-8cxsc\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.966409 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a0e237f-ebef-42b0-ad96-926e15307914-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:35 crc kubenswrapper[5130]: I1212 16:20:35.966564 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a0e237f-ebef-42b0-ad96-926e15307914-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.068511 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a0e237f-ebef-42b0-ad96-926e15307914-tmp\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.068581 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8cxsc\" (UniqueName: \"kubernetes.io/projected/9a0e237f-ebef-42b0-ad96-926e15307914-kube-api-access-8cxsc\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.068636 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a0e237f-ebef-42b0-ad96-926e15307914-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.068674 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9a0e237f-ebef-42b0-ad96-926e15307914-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.070731 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a0e237f-ebef-42b0-ad96-926e15307914-tmp\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.070839 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a0e237f-ebef-42b0-ad96-926e15307914-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.080495 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a0e237f-ebef-42b0-ad96-926e15307914-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.089962 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cxsc\" (UniqueName: \"kubernetes.io/projected/9a0e237f-ebef-42b0-ad96-926e15307914-kube-api-access-8cxsc\") pod \"marketplace-operator-547dbd544d-4vhrb\" (UID: \"9a0e237f-ebef-42b0-ad96-926e15307914\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.274926 5130 
generic.go:358] "Generic (PLEG): container finished" podID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerID="8a404432fdc03966e4b4413b026d4d5da46820bf9ded19a3ceb42d61ab1be328" exitCode=0 Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.275279 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvzzz" event={"ID":"f1a12a40-8493-41e1-84b7-312fc948fca8","Type":"ContainerDied","Data":"8a404432fdc03966e4b4413b026d4d5da46820bf9ded19a3ceb42d61ab1be328"} Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.277391 5130 generic.go:358] "Generic (PLEG): container finished" podID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerID="65636c30f22479721b311e85c17fa731da3fa4e60c7c20a52840f091af2f46a8" exitCode=0 Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.277424 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" event={"ID":"1de41ef3-7896-4e9c-8201-8174bc4468c4","Type":"ContainerDied","Data":"65636c30f22479721b311e85c17fa731da3fa4e60c7c20a52840f091af2f46a8"} Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.277741 5130 scope.go:117] "RemoveContainer" containerID="f6800f29ce6dfd01bbd7f9c0b999d8c7c936dd1b2b43419d7987576203561f95" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.281020 5130 generic.go:358] "Generic (PLEG): container finished" podID="3686d912-c8e4-413f-b036-f206a4e826a2" containerID="5fb7a27d9d232fecf29af0ea2cf521c7fcffd29cc516ee00c9b3fdc12860c3c9" exitCode=0 Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.281091 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gt6h" event={"ID":"3686d912-c8e4-413f-b036-f206a4e826a2","Type":"ContainerDied","Data":"5fb7a27d9d232fecf29af0ea2cf521c7fcffd29cc516ee00c9b3fdc12860c3c9"} Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.283289 5130 generic.go:358] "Generic (PLEG): container finished" 
podID="573d2658-6034-4715-a9ad-a7828b324fd5" containerID="e5ac6bb6b6a834b1d5556d9b1331cd2084885f081082cc31d77c1b8643f8d55b" exitCode=0 Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.283382 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ndfc" event={"ID":"573d2658-6034-4715-a9ad-a7828b324fd5","Type":"ContainerDied","Data":"e5ac6bb6b6a834b1d5556d9b1331cd2084885f081082cc31d77c1b8643f8d55b"} Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.285775 5130 generic.go:358] "Generic (PLEG): container finished" podID="1aaf652b-1019-4193-839d-875d12cc1e27" containerID="56b6b5fa1fdb979a756c382f6c6262c415947ed7dac44278f932ddd7ef046da8" exitCode=0 Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.285955 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7x92" event={"ID":"1aaf652b-1019-4193-839d-875d12cc1e27","Type":"ContainerDied","Data":"56b6b5fa1fdb979a756c382f6c6262c415947ed7dac44278f932ddd7ef046da8"} Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.303989 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.318716 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ndfc" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.378310 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5knbf\" (UniqueName: \"kubernetes.io/projected/573d2658-6034-4715-a9ad-a7828b324fd5-kube-api-access-5knbf\") pod \"573d2658-6034-4715-a9ad-a7828b324fd5\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.378392 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-utilities\") pod \"573d2658-6034-4715-a9ad-a7828b324fd5\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.378541 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-catalog-content\") pod \"573d2658-6034-4715-a9ad-a7828b324fd5\" (UID: \"573d2658-6034-4715-a9ad-a7828b324fd5\") " Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.381689 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-utilities" (OuterVolumeSpecName: "utilities") pod "573d2658-6034-4715-a9ad-a7828b324fd5" (UID: "573d2658-6034-4715-a9ad-a7828b324fd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.385586 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573d2658-6034-4715-a9ad-a7828b324fd5-kube-api-access-5knbf" (OuterVolumeSpecName: "kube-api-access-5knbf") pod "573d2658-6034-4715-a9ad-a7828b324fd5" (UID: "573d2658-6034-4715-a9ad-a7828b324fd5"). InnerVolumeSpecName "kube-api-access-5knbf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.480336 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5knbf\" (UniqueName: \"kubernetes.io/projected/573d2658-6034-4715-a9ad-a7828b324fd5-kube-api-access-5knbf\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.480373 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.526772 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "573d2658-6034-4715-a9ad-a7828b324fd5" (UID: "573d2658-6034-4715-a9ad-a7828b324fd5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.581785 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573d2658-6034-4715-a9ad-a7828b324fd5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.750267 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-4vhrb"] Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.757871 5130 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.793060 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2gt6h" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.884781 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcdz8\" (UniqueName: \"kubernetes.io/projected/3686d912-c8e4-413f-b036-f206a4e826a2-kube-api-access-hcdz8\") pod \"3686d912-c8e4-413f-b036-f206a4e826a2\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.884879 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-utilities\") pod \"3686d912-c8e4-413f-b036-f206a4e826a2\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.884982 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-catalog-content\") pod \"3686d912-c8e4-413f-b036-f206a4e826a2\" (UID: \"3686d912-c8e4-413f-b036-f206a4e826a2\") " Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.886684 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-utilities" (OuterVolumeSpecName: "utilities") pod "3686d912-c8e4-413f-b036-f206a4e826a2" (UID: "3686d912-c8e4-413f-b036-f206a4e826a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.894651 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3686d912-c8e4-413f-b036-f206a4e826a2-kube-api-access-hcdz8" (OuterVolumeSpecName: "kube-api-access-hcdz8") pod "3686d912-c8e4-413f-b036-f206a4e826a2" (UID: "3686d912-c8e4-413f-b036-f206a4e826a2"). InnerVolumeSpecName "kube-api-access-hcdz8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.950528 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3686d912-c8e4-413f-b036-f206a4e826a2" (UID: "3686d912-c8e4-413f-b036-f206a4e826a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.988885 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hcdz8\" (UniqueName: \"kubernetes.io/projected/3686d912-c8e4-413f-b036-f206a4e826a2-kube-api-access-hcdz8\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.988928 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:36 crc kubenswrapper[5130]: I1212 16:20:36.988942 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3686d912-c8e4-413f-b036-f206a4e826a2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.026756 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7x92" Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.043796 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvzzz" Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.048803 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092025 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1de41ef3-7896-4e9c-8201-8174bc4468c4-tmp\") pod \"1de41ef3-7896-4e9c-8201-8174bc4468c4\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092124 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-catalog-content\") pod \"f1a12a40-8493-41e1-84b7-312fc948fca8\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092189 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-utilities\") pod \"1aaf652b-1019-4193-839d-875d12cc1e27\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092228 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lmp7\" (UniqueName: \"kubernetes.io/projected/f1a12a40-8493-41e1-84b7-312fc948fca8-kube-api-access-7lmp7\") pod \"f1a12a40-8493-41e1-84b7-312fc948fca8\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092310 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-catalog-content\") pod \"1aaf652b-1019-4193-839d-875d12cc1e27\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092375 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-operator-metrics\") pod \"1de41ef3-7896-4e9c-8201-8174bc4468c4\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092454 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-utilities\") pod \"f1a12a40-8493-41e1-84b7-312fc948fca8\" (UID: \"f1a12a40-8493-41e1-84b7-312fc948fca8\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092523 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-trusted-ca\") pod \"1de41ef3-7896-4e9c-8201-8174bc4468c4\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092572 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw9w8\" (UniqueName: \"kubernetes.io/projected/1aaf652b-1019-4193-839d-875d12cc1e27-kube-api-access-xw9w8\") pod \"1aaf652b-1019-4193-839d-875d12cc1e27\" (UID: \"1aaf652b-1019-4193-839d-875d12cc1e27\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.092619 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4xfb\" (UniqueName: \"kubernetes.io/projected/1de41ef3-7896-4e9c-8201-8174bc4468c4-kube-api-access-q4xfb\") pod \"1de41ef3-7896-4e9c-8201-8174bc4468c4\" (UID: \"1de41ef3-7896-4e9c-8201-8174bc4468c4\") " Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.096395 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1de41ef3-7896-4e9c-8201-8174bc4468c4" 
(UID: "1de41ef3-7896-4e9c-8201-8174bc4468c4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.097609 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-utilities" (OuterVolumeSpecName: "utilities") pod "f1a12a40-8493-41e1-84b7-312fc948fca8" (UID: "f1a12a40-8493-41e1-84b7-312fc948fca8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.102980 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1de41ef3-7896-4e9c-8201-8174bc4468c4" (UID: "1de41ef3-7896-4e9c-8201-8174bc4468c4"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.106108 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a12a40-8493-41e1-84b7-312fc948fca8-kube-api-access-7lmp7" (OuterVolumeSpecName: "kube-api-access-7lmp7") pod "f1a12a40-8493-41e1-84b7-312fc948fca8" (UID: "f1a12a40-8493-41e1-84b7-312fc948fca8"). InnerVolumeSpecName "kube-api-access-7lmp7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.107698 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-utilities" (OuterVolumeSpecName: "utilities") pod "1aaf652b-1019-4193-839d-875d12cc1e27" (UID: "1aaf652b-1019-4193-839d-875d12cc1e27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.108002 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de41ef3-7896-4e9c-8201-8174bc4468c4-tmp" (OuterVolumeSpecName: "tmp") pod "1de41ef3-7896-4e9c-8201-8174bc4468c4" (UID: "1de41ef3-7896-4e9c-8201-8174bc4468c4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.111879 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aaf652b-1019-4193-839d-875d12cc1e27-kube-api-access-xw9w8" (OuterVolumeSpecName: "kube-api-access-xw9w8") pod "1aaf652b-1019-4193-839d-875d12cc1e27" (UID: "1aaf652b-1019-4193-839d-875d12cc1e27"). InnerVolumeSpecName "kube-api-access-xw9w8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.140609 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de41ef3-7896-4e9c-8201-8174bc4468c4-kube-api-access-q4xfb" (OuterVolumeSpecName: "kube-api-access-q4xfb") pod "1de41ef3-7896-4e9c-8201-8174bc4468c4" (UID: "1de41ef3-7896-4e9c-8201-8174bc4468c4"). InnerVolumeSpecName "kube-api-access-q4xfb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.174055 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1aaf652b-1019-4193-839d-875d12cc1e27" (UID: "1aaf652b-1019-4193-839d-875d12cc1e27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.191594 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1a12a40-8493-41e1-84b7-312fc948fca8" (UID: "f1a12a40-8493-41e1-84b7-312fc948fca8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194431 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194486 5130 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194498 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xw9w8\" (UniqueName: \"kubernetes.io/projected/1aaf652b-1019-4193-839d-875d12cc1e27-kube-api-access-xw9w8\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194508 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4xfb\" (UniqueName: \"kubernetes.io/projected/1de41ef3-7896-4e9c-8201-8174bc4468c4-kube-api-access-q4xfb\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194518 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1de41ef3-7896-4e9c-8201-8174bc4468c4-tmp\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194527 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a12a40-8493-41e1-84b7-312fc948fca8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194535 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194543 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7lmp7\" (UniqueName: \"kubernetes.io/projected/f1a12a40-8493-41e1-84b7-312fc948fca8-kube-api-access-7lmp7\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194551 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aaf652b-1019-4193-839d-875d12cc1e27-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.194559 5130 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1de41ef3-7896-4e9c-8201-8174bc4468c4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.293326 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb" event={"ID":"1de41ef3-7896-4e9c-8201-8174bc4468c4","Type":"ContainerDied","Data":"80eb6b504f4b6d215a1fdd56503837348aea4e832c71cca42b9c33074674fdba"}
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.293381 5130 scope.go:117] "RemoveContainer" containerID="65636c30f22479721b311e85c17fa731da3fa4e60c7c20a52840f091af2f46a8"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.293431 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.296616 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gt6h" event={"ID":"3686d912-c8e4-413f-b036-f206a4e826a2","Type":"ContainerDied","Data":"5cc1da989e963af873e82696b122995145445095ec336e5b958ae3ddef9bfffd"}
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.296991 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2gt6h"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.303058 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ndfc" event={"ID":"573d2658-6034-4715-a9ad-a7828b324fd5","Type":"ContainerDied","Data":"ac163b2bf1c1d578b9037f0b59dae7dd262bb9d00e98558c9f328edeb8dabdb0"}
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.303248 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ndfc"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.305580 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7x92" event={"ID":"1aaf652b-1019-4193-839d-875d12cc1e27","Type":"ContainerDied","Data":"f1da0765a97fe218a374080c0f1f06e2731cd63af36a922455361a4960727e20"}
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.305729 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7x92"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.311424 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" event={"ID":"9a0e237f-ebef-42b0-ad96-926e15307914","Type":"ContainerStarted","Data":"a8a69d9c438c28bfef032835cbc78a80be6b2a221427f7fcdc8707dccaf07128"}
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.311499 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" event={"ID":"9a0e237f-ebef-42b0-ad96-926e15307914","Type":"ContainerStarted","Data":"1ebc495b03b4eaae43e1bfdbf980668073684b96795435a1d924a5270ea75ef3"}
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.312090 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.314127 5130 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-4vhrb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body=
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.314205 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" podUID="9a0e237f-ebef-42b0-ad96-926e15307914" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.318988 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvzzz" event={"ID":"f1a12a40-8493-41e1-84b7-312fc948fca8","Type":"ContainerDied","Data":"70771a8a130e6322df73890d22e5b58e9c784d9164e5ed9740d937291a171571"}
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.319119 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvzzz"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.339902 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb" podStartSLOduration=2.339877342 podStartE2EDuration="2.339877342s" podCreationTimestamp="2025-12-12 16:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:20:37.333945297 +0000 UTC m=+337.231620149" watchObservedRunningTime="2025-12-12 16:20:37.339877342 +0000 UTC m=+337.237552174"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.348390 5130 scope.go:117] "RemoveContainer" containerID="5fb7a27d9d232fecf29af0ea2cf521c7fcffd29cc516ee00c9b3fdc12860c3c9"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.369433 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.374481 5130 scope.go:117] "RemoveContainer" containerID="ae7e967711e223d099a40d4ed44911cbe8c26c71f4671c594e5898a37bde8057"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.383368 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xpvsb"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.393987 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2gt6h"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.398707 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2gt6h"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.399097 5130 scope.go:117] "RemoveContainer" containerID="0b4113c7d36d2a230bc4e2acb1da128399bd31376c24477255787da86e629e81"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.403114 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7x92"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.410372 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7x92"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.416002 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvzzz"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.422215 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pvzzz"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.426020 5130 scope.go:117] "RemoveContainer" containerID="e5ac6bb6b6a834b1d5556d9b1331cd2084885f081082cc31d77c1b8643f8d55b"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.428002 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ndfc"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.431637 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ndfc"]
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.465072 5130 scope.go:117] "RemoveContainer" containerID="401e173f0e693614a546c3cea9ff0cace58c184cd9cdd3104503b186b8193d00"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.483902 5130 scope.go:117] "RemoveContainer" containerID="a44d2a4eeeeb09f66e7765e59ee141b97a02eccc0257c3866e57084f4a9d1b9b"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.502853 5130 scope.go:117] "RemoveContainer" containerID="56b6b5fa1fdb979a756c382f6c6262c415947ed7dac44278f932ddd7ef046da8"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.525848 5130 scope.go:117] "RemoveContainer" containerID="8b6e0f771c54e0f2031e922831f4e9a8890ad74e45ce729b1967a4918169b40b"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.545924 5130 scope.go:117] "RemoveContainer" containerID="40faa368c7bb6179b1e51cd173a9e13967aa1bdeffc22c992fde0f7dda5ed0fe"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.563195 5130 scope.go:117] "RemoveContainer" containerID="8a404432fdc03966e4b4413b026d4d5da46820bf9ded19a3ceb42d61ab1be328"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.581000 5130 scope.go:117] "RemoveContainer" containerID="4510f8c6500cd79ead24de9fdb8d77ed1941057499119f5133a4d37c2a96bbc5"
Dec 12 16:20:37 crc kubenswrapper[5130]: I1212 16:20:37.604259 5130 scope.go:117] "RemoveContainer" containerID="b7222411b3f5b2c07c23cec910ae8077781b1ba52eee3ba591530d28314e3557"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.342164 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-4vhrb"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.377730 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" path="/var/lib/kubelet/pods/1aaf652b-1019-4193-839d-875d12cc1e27/volumes"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.378666 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" path="/var/lib/kubelet/pods/1de41ef3-7896-4e9c-8201-8174bc4468c4/volumes"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.379396 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" path="/var/lib/kubelet/pods/3686d912-c8e4-413f-b036-f206a4e826a2/volumes"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.382812 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" path="/var/lib/kubelet/pods/573d2658-6034-4715-a9ad-a7828b324fd5/volumes"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.383582 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" path="/var/lib/kubelet/pods/f1a12a40-8493-41e1-84b7-312fc948fca8/volumes"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.395841 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wqdb8"]
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397085 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397109 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397127 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397138 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397189 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397200 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397215 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397220 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397227 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397232 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397274 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397287 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397356 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397370 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397382 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397390 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="extract-utilities"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397400 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397438 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397450 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397457 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397465 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397473 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397480 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.397992 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.398004 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.398009 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="extract-content"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.398135 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.398147 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.400373 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1a12a40-8493-41e1-84b7-312fc948fca8" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.400437 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3686d912-c8e4-413f-b036-f206a4e826a2" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.400457 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="573d2658-6034-4715-a9ad-a7828b324fd5" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.400470 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="1aaf652b-1019-4193-839d-875d12cc1e27" containerName="registry-server"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.400723 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.400740 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de41ef3-7896-4e9c-8201-8174bc4468c4" containerName="marketplace-operator"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.423650 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqdb8"]
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.424105 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.428603 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.513889 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-catalog-content\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.514108 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-utilities\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.514333 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjp6\" (UniqueName: \"kubernetes.io/projected/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-kube-api-access-5cjp6\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.615997 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-utilities\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.616116 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjp6\" (UniqueName: \"kubernetes.io/projected/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-kube-api-access-5cjp6\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.616221 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-catalog-content\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.616740 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-catalog-content\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.616966 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-utilities\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.641567 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjp6\" (UniqueName: \"kubernetes.io/projected/c82ddae8-4dc3-4d48-96b1-cd9613cc32c3-kube-api-access-5cjp6\") pod \"redhat-operators-wqdb8\" (UID: \"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3\") " pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:38 crc kubenswrapper[5130]: I1212 16:20:38.748316 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqdb8"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.157137 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqdb8"]
Dec 12 16:20:39 crc kubenswrapper[5130]: W1212 16:20:39.161690 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82ddae8_4dc3_4d48_96b1_cd9613cc32c3.slice/crio-08d678588a91fedaa50b05baac41cdac2d5c0355efa6380e596d1d26d3cd8ee4 WatchSource:0}: Error finding container 08d678588a91fedaa50b05baac41cdac2d5c0355efa6380e596d1d26d3cd8ee4: Status 404 returned error can't find the container with id 08d678588a91fedaa50b05baac41cdac2d5c0355efa6380e596d1d26d3cd8ee4
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.351155 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqdb8" event={"ID":"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3","Type":"ContainerStarted","Data":"08d678588a91fedaa50b05baac41cdac2d5c0355efa6380e596d1d26d3cd8ee4"}
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.389316 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-psnw2"]
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.410316 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psnw2"]
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.410477 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.413429 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.533899 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57n96\" (UniqueName: \"kubernetes.io/projected/2d107578-4c5d-4271-a1a7-660aadfab0d1-kube-api-access-57n96\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.533978 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d107578-4c5d-4271-a1a7-660aadfab0d1-utilities\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.534021 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d107578-4c5d-4271-a1a7-660aadfab0d1-catalog-content\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.635748 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d107578-4c5d-4271-a1a7-660aadfab0d1-catalog-content\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.635813 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57n96\" (UniqueName: \"kubernetes.io/projected/2d107578-4c5d-4271-a1a7-660aadfab0d1-kube-api-access-57n96\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.635857 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d107578-4c5d-4271-a1a7-660aadfab0d1-utilities\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.636300 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d107578-4c5d-4271-a1a7-660aadfab0d1-catalog-content\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.636341 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d107578-4c5d-4271-a1a7-660aadfab0d1-utilities\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.656479 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-57n96\" (UniqueName: \"kubernetes.io/projected/2d107578-4c5d-4271-a1a7-660aadfab0d1-kube-api-access-57n96\") pod \"certified-operators-psnw2\" (UID: \"2d107578-4c5d-4271-a1a7-660aadfab0d1\") " pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:39 crc kubenswrapper[5130]: I1212 16:20:39.739121 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psnw2"
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.118374 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psnw2"]
Dec 12 16:20:40 crc kubenswrapper[5130]: W1212 16:20:40.124835 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d107578_4c5d_4271_a1a7_660aadfab0d1.slice/crio-7773d79f7edba7c2fade19500e032cb8eda2fddefa0dffa30bcd136741f76b43 WatchSource:0}: Error finding container 7773d79f7edba7c2fade19500e032cb8eda2fddefa0dffa30bcd136741f76b43: Status 404 returned error can't find the container with id 7773d79f7edba7c2fade19500e032cb8eda2fddefa0dffa30bcd136741f76b43
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.359975 5130 generic.go:358] "Generic (PLEG): container finished" podID="c82ddae8-4dc3-4d48-96b1-cd9613cc32c3" containerID="2c92758518c7ec2d3d73c6a96563ce070d169d79bc7b889121b6257346da228e" exitCode=0
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.360125 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqdb8" event={"ID":"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3","Type":"ContainerDied","Data":"2c92758518c7ec2d3d73c6a96563ce070d169d79bc7b889121b6257346da228e"}
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.367033 5130 generic.go:358] "Generic (PLEG): container finished" podID="2d107578-4c5d-4271-a1a7-660aadfab0d1" containerID="aba2ebce11ef0dd6ee2be4622b0d62542dfda44c48939ced169eb31d491a647a" exitCode=0
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.367159 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psnw2" event={"ID":"2d107578-4c5d-4271-a1a7-660aadfab0d1","Type":"ContainerDied","Data":"aba2ebce11ef0dd6ee2be4622b0d62542dfda44c48939ced169eb31d491a647a"}
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.367248 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psnw2" event={"ID":"2d107578-4c5d-4271-a1a7-660aadfab0d1","Type":"ContainerStarted","Data":"7773d79f7edba7c2fade19500e032cb8eda2fddefa0dffa30bcd136741f76b43"}
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.791075 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6jgv5"]
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.805913 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jgv5"]
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.806063 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.809020 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.955278 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d92ss\" (UniqueName: \"kubernetes.io/projected/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-kube-api-access-d92ss\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.955359 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-utilities\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:40 crc kubenswrapper[5130]: I1212 16:20:40.955437 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-catalog-content\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.056938 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-utilities\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.057049 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-catalog-content\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.057105 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d92ss\" (UniqueName: \"kubernetes.io/projected/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-kube-api-access-d92ss\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.057740 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-catalog-content\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5"
Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.057740 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-utilities\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5" Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.077740 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d92ss\" (UniqueName: \"kubernetes.io/projected/0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8-kube-api-access-d92ss\") pod \"community-operators-6jgv5\" (UID: \"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8\") " pod="openshift-marketplace/community-operators-6jgv5" Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.125660 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jgv5" Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.533665 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jgv5"] Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.791985 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jkgqd"] Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.835228 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jkgqd"] Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.835402 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.838611 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.968254 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrcq\" (UniqueName: \"kubernetes.io/projected/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-kube-api-access-slrcq\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.968702 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-utilities\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:41 crc kubenswrapper[5130]: I1212 16:20:41.968797 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-catalog-content\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.070623 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-slrcq\" (UniqueName: \"kubernetes.io/projected/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-kube-api-access-slrcq\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.070697 
5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-utilities\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.070737 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-catalog-content\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.071279 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-catalog-content\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.071303 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-utilities\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.093704 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrcq\" (UniqueName: \"kubernetes.io/projected/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-kube-api-access-slrcq\") pod \"redhat-marketplace-jkgqd\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.153776 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.385362 5130 generic.go:358] "Generic (PLEG): container finished" podID="0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8" containerID="3d34d9916bfd443f8d16b2827e354ffc8c353ad823d3fd31ec1bedeff55fe62a" exitCode=0 Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.385760 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgv5" event={"ID":"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8","Type":"ContainerDied","Data":"3d34d9916bfd443f8d16b2827e354ffc8c353ad823d3fd31ec1bedeff55fe62a"} Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.385899 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgv5" event={"ID":"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8","Type":"ContainerStarted","Data":"b765b5c8351a8fb93680585f3e7f0cfc2b1c43781870e36adc047f36c6ef9bf0"} Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.399451 5130 generic.go:358] "Generic (PLEG): container finished" podID="c82ddae8-4dc3-4d48-96b1-cd9613cc32c3" containerID="72cda219991962352a2735196796a69221026c4f70dcd93799d5127bd09ac7c3" exitCode=0 Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.399930 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqdb8" event={"ID":"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3","Type":"ContainerDied","Data":"72cda219991962352a2735196796a69221026c4f70dcd93799d5127bd09ac7c3"} Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.411363 5130 generic.go:358] "Generic (PLEG): container finished" podID="2d107578-4c5d-4271-a1a7-660aadfab0d1" containerID="440123027cd5fc948a52503e61a29eec061eac5e68c36afa1dd49eed510aa0fc" exitCode=0 Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.411507 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psnw2" 
event={"ID":"2d107578-4c5d-4271-a1a7-660aadfab0d1","Type":"ContainerDied","Data":"440123027cd5fc948a52503e61a29eec061eac5e68c36afa1dd49eed510aa0fc"} Dec 12 16:20:42 crc kubenswrapper[5130]: I1212 16:20:42.577226 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jkgqd"] Dec 12 16:20:42 crc kubenswrapper[5130]: W1212 16:20:42.582334 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5307a6d2_3f00_4ebd_8c7b_e101e24f4dd4.slice/crio-8bea72c104dd5234c4cd3783a470a5fd8615adb812871de1a18d6c25aed0610e WatchSource:0}: Error finding container 8bea72c104dd5234c4cd3783a470a5fd8615adb812871de1a18d6c25aed0610e: Status 404 returned error can't find the container with id 8bea72c104dd5234c4cd3783a470a5fd8615adb812871de1a18d6c25aed0610e Dec 12 16:20:43 crc kubenswrapper[5130]: I1212 16:20:43.419905 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqdb8" event={"ID":"c82ddae8-4dc3-4d48-96b1-cd9613cc32c3","Type":"ContainerStarted","Data":"88a8e2caf04fccc622081cbb844b070ddeab0272347d81616b8e491be1358498"} Dec 12 16:20:43 crc kubenswrapper[5130]: I1212 16:20:43.422633 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psnw2" event={"ID":"2d107578-4c5d-4271-a1a7-660aadfab0d1","Type":"ContainerStarted","Data":"b2776f25233c58239242a0cd3c3bafeb8c000717bc98cd8750211f2667795474"} Dec 12 16:20:43 crc kubenswrapper[5130]: I1212 16:20:43.426496 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jkgqd" event={"ID":"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4","Type":"ContainerStarted","Data":"816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212"} Dec 12 16:20:43 crc kubenswrapper[5130]: I1212 16:20:43.426535 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jkgqd" 
event={"ID":"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4","Type":"ContainerStarted","Data":"8bea72c104dd5234c4cd3783a470a5fd8615adb812871de1a18d6c25aed0610e"} Dec 12 16:20:43 crc kubenswrapper[5130]: I1212 16:20:43.441877 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wqdb8" podStartSLOduration=4.154486087 podStartE2EDuration="5.441859715s" podCreationTimestamp="2025-12-12 16:20:38 +0000 UTC" firstStartedPulling="2025-12-12 16:20:40.361095928 +0000 UTC m=+340.258770760" lastFinishedPulling="2025-12-12 16:20:41.648469556 +0000 UTC m=+341.546144388" observedRunningTime="2025-12-12 16:20:43.438716413 +0000 UTC m=+343.336391245" watchObservedRunningTime="2025-12-12 16:20:43.441859715 +0000 UTC m=+343.339534547" Dec 12 16:20:44 crc kubenswrapper[5130]: I1212 16:20:44.446424 5130 generic.go:358] "Generic (PLEG): container finished" podID="0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8" containerID="be25656788913ec76a39a32c6a740cc07973a1b898da9044a73a03894dab8c7c" exitCode=0 Dec 12 16:20:44 crc kubenswrapper[5130]: I1212 16:20:44.446737 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgv5" event={"ID":"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8","Type":"ContainerDied","Data":"be25656788913ec76a39a32c6a740cc07973a1b898da9044a73a03894dab8c7c"} Dec 12 16:20:44 crc kubenswrapper[5130]: I1212 16:20:44.449247 5130 generic.go:358] "Generic (PLEG): container finished" podID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerID="816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212" exitCode=0 Dec 12 16:20:44 crc kubenswrapper[5130]: I1212 16:20:44.449300 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jkgqd" event={"ID":"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4","Type":"ContainerDied","Data":"816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212"} Dec 12 16:20:44 crc kubenswrapper[5130]: I1212 16:20:44.494899 
5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-psnw2" podStartSLOduration=4.575758768 podStartE2EDuration="5.494871676s" podCreationTimestamp="2025-12-12 16:20:39 +0000 UTC" firstStartedPulling="2025-12-12 16:20:40.368141393 +0000 UTC m=+340.265816225" lastFinishedPulling="2025-12-12 16:20:41.287254301 +0000 UTC m=+341.184929133" observedRunningTime="2025-12-12 16:20:44.492532124 +0000 UTC m=+344.390206966" watchObservedRunningTime="2025-12-12 16:20:44.494871676 +0000 UTC m=+344.392546508" Dec 12 16:20:45 crc kubenswrapper[5130]: I1212 16:20:45.458314 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgv5" event={"ID":"0b3a2ae2-26c9-4d3a-8ea3-af2fc0de40d8","Type":"ContainerStarted","Data":"3a3e39c37ad656af8585aa9b22707025eed99e9cbe462db2b3c8c180acc397a5"} Dec 12 16:20:45 crc kubenswrapper[5130]: I1212 16:20:45.480068 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6jgv5" podStartSLOduration=4.16411376 podStartE2EDuration="5.480045037s" podCreationTimestamp="2025-12-12 16:20:40 +0000 UTC" firstStartedPulling="2025-12-12 16:20:42.386581485 +0000 UTC m=+342.284256317" lastFinishedPulling="2025-12-12 16:20:43.702512762 +0000 UTC m=+343.600187594" observedRunningTime="2025-12-12 16:20:45.478074885 +0000 UTC m=+345.375749727" watchObservedRunningTime="2025-12-12 16:20:45.480045037 +0000 UTC m=+345.377719869" Dec 12 16:20:46 crc kubenswrapper[5130]: I1212 16:20:46.420655 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"] Dec 12 16:20:46 crc kubenswrapper[5130]: I1212 16:20:46.421500 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" podUID="1fad0dc5-4596-4305-9545-f2525bf2a5f6" 
containerName="route-controller-manager" containerID="cri-o://8fe3222073eba01c686e68480538e777dc4f9e27f3286132426020a2f9728e94" gracePeriod=30 Dec 12 16:20:46 crc kubenswrapper[5130]: I1212 16:20:46.466553 5130 generic.go:358] "Generic (PLEG): container finished" podID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerID="8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6" exitCode=0 Dec 12 16:20:46 crc kubenswrapper[5130]: I1212 16:20:46.466597 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jkgqd" event={"ID":"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4","Type":"ContainerDied","Data":"8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6"} Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.479087 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jkgqd" event={"ID":"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4","Type":"ContainerStarted","Data":"01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957"} Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.481884 5130 generic.go:358] "Generic (PLEG): container finished" podID="1fad0dc5-4596-4305-9545-f2525bf2a5f6" containerID="8fe3222073eba01c686e68480538e777dc4f9e27f3286132426020a2f9728e94" exitCode=0 Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.481967 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" event={"ID":"1fad0dc5-4596-4305-9545-f2525bf2a5f6","Type":"ContainerDied","Data":"8fe3222073eba01c686e68480538e777dc4f9e27f3286132426020a2f9728e94"} Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.499757 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jkgqd" podStartSLOduration=5.460172624 podStartE2EDuration="6.499724012s" podCreationTimestamp="2025-12-12 16:20:41 +0000 UTC" firstStartedPulling="2025-12-12 
16:20:44.450094551 +0000 UTC m=+344.347769383" lastFinishedPulling="2025-12-12 16:20:45.489645939 +0000 UTC m=+345.387320771" observedRunningTime="2025-12-12 16:20:47.498938021 +0000 UTC m=+347.396612853" watchObservedRunningTime="2025-12-12 16:20:47.499724012 +0000 UTC m=+347.397398844" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.735543 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.767098 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh"] Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.767768 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1fad0dc5-4596-4305-9545-f2525bf2a5f6" containerName="route-controller-manager" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.767788 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fad0dc5-4596-4305-9545-f2525bf2a5f6" containerName="route-controller-manager" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.767898 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="1fad0dc5-4596-4305-9545-f2525bf2a5f6" containerName="route-controller-manager" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.823124 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh"] Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.823342 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.843132 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fad0dc5-4596-4305-9545-f2525bf2a5f6-serving-cert\") pod \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.843296 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q8n7\" (UniqueName: \"kubernetes.io/projected/1fad0dc5-4596-4305-9545-f2525bf2a5f6-kube-api-access-8q8n7\") pod \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.843482 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-config\") pod \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.843627 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fad0dc5-4596-4305-9545-f2525bf2a5f6-tmp\") pod \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.843677 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-client-ca\") pod \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\" (UID: \"1fad0dc5-4596-4305-9545-f2525bf2a5f6\") " Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.844008 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1fad0dc5-4596-4305-9545-f2525bf2a5f6-tmp" (OuterVolumeSpecName: "tmp") pod "1fad0dc5-4596-4305-9545-f2525bf2a5f6" (UID: "1fad0dc5-4596-4305-9545-f2525bf2a5f6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.844219 5130 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1fad0dc5-4596-4305-9545-f2525bf2a5f6-tmp\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.844393 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-config" (OuterVolumeSpecName: "config") pod "1fad0dc5-4596-4305-9545-f2525bf2a5f6" (UID: "1fad0dc5-4596-4305-9545-f2525bf2a5f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.844407 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "1fad0dc5-4596-4305-9545-f2525bf2a5f6" (UID: "1fad0dc5-4596-4305-9545-f2525bf2a5f6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.849502 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fad0dc5-4596-4305-9545-f2525bf2a5f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1fad0dc5-4596-4305-9545-f2525bf2a5f6" (UID: "1fad0dc5-4596-4305-9545-f2525bf2a5f6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.853201 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fad0dc5-4596-4305-9545-f2525bf2a5f6-kube-api-access-8q8n7" (OuterVolumeSpecName: "kube-api-access-8q8n7") pod "1fad0dc5-4596-4305-9545-f2525bf2a5f6" (UID: "1fad0dc5-4596-4305-9545-f2525bf2a5f6"). InnerVolumeSpecName "kube-api-access-8q8n7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945652 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99xm\" (UniqueName: \"kubernetes.io/projected/952b1cf6-a983-4b00-bca6-24b95d6bff57-kube-api-access-c99xm\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945720 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/952b1cf6-a983-4b00-bca6-24b95d6bff57-serving-cert\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945744 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/952b1cf6-a983-4b00-bca6-24b95d6bff57-client-ca\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945775 5130 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952b1cf6-a983-4b00-bca6-24b95d6bff57-config\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945791 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/952b1cf6-a983-4b00-bca6-24b95d6bff57-tmp\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945893 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8q8n7\" (UniqueName: \"kubernetes.io/projected/1fad0dc5-4596-4305-9545-f2525bf2a5f6-kube-api-access-8q8n7\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945906 5130 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945916 5130 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fad0dc5-4596-4305-9545-f2525bf2a5f6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:47 crc kubenswrapper[5130]: I1212 16:20:47.945925 5130 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fad0dc5-4596-4305-9545-f2525bf2a5f6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.047478 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c99xm\" (UniqueName: \"kubernetes.io/projected/952b1cf6-a983-4b00-bca6-24b95d6bff57-kube-api-access-c99xm\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.047567 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/952b1cf6-a983-4b00-bca6-24b95d6bff57-serving-cert\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.047592 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/952b1cf6-a983-4b00-bca6-24b95d6bff57-client-ca\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.047616 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952b1cf6-a983-4b00-bca6-24b95d6bff57-config\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.047768 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/952b1cf6-a983-4b00-bca6-24b95d6bff57-tmp\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " 
pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.052989 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/952b1cf6-a983-4b00-bca6-24b95d6bff57-client-ca\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.053800 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/952b1cf6-a983-4b00-bca6-24b95d6bff57-serving-cert\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.048403 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/952b1cf6-a983-4b00-bca6-24b95d6bff57-tmp\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.054550 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/952b1cf6-a983-4b00-bca6-24b95d6bff57-config\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.081483 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99xm\" (UniqueName: 
\"kubernetes.io/projected/952b1cf6-a983-4b00-bca6-24b95d6bff57-kube-api-access-c99xm\") pod \"route-controller-manager-8fdcdbb66-mzfqh\" (UID: \"952b1cf6-a983-4b00-bca6-24b95d6bff57\") " pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.143717 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.489147 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.489144 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt" event={"ID":"1fad0dc5-4596-4305-9545-f2525bf2a5f6","Type":"ContainerDied","Data":"a617fc7065f1a27b47bb99a0229d4625e224ad99323bbfe378c7893aeb2e13f9"} Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.489575 5130 scope.go:117] "RemoveContainer" containerID="8fe3222073eba01c686e68480538e777dc4f9e27f3286132426020a2f9728e94" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.517294 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"] Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.521585 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf6bf5794-d5zzt"] Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.579130 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh"] Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.749248 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/redhat-operators-wqdb8" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.749605 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wqdb8" Dec 12 16:20:48 crc kubenswrapper[5130]: I1212 16:20:48.795661 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wqdb8" Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.497775 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" event={"ID":"952b1cf6-a983-4b00-bca6-24b95d6bff57","Type":"ContainerStarted","Data":"083b21f942f9ed72a595d3826b93065969c3923e455a62d5a78277427a001448"} Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.498348 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.498388 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" event={"ID":"952b1cf6-a983-4b00-bca6-24b95d6bff57","Type":"ContainerStarted","Data":"67ac35c04ae5d5bd39d34e7ec55083a9d5fce60d8261f2f593679d0ef3030a1f"} Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.519970 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" podStartSLOduration=3.519948051 podStartE2EDuration="3.519948051s" podCreationTimestamp="2025-12-12 16:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:20:49.51610054 +0000 UTC m=+349.413775392" watchObservedRunningTime="2025-12-12 16:20:49.519948051 +0000 UTC m=+349.417622883" Dec 12 16:20:49 crc 
kubenswrapper[5130]: I1212 16:20:49.543646 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wqdb8" Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.571082 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8fdcdbb66-mzfqh" Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.740470 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-psnw2" Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.742887 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-psnw2" Dec 12 16:20:49 crc kubenswrapper[5130]: I1212 16:20:49.781804 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-psnw2" Dec 12 16:20:50 crc kubenswrapper[5130]: I1212 16:20:50.379405 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fad0dc5-4596-4305-9545-f2525bf2a5f6" path="/var/lib/kubelet/pods/1fad0dc5-4596-4305-9545-f2525bf2a5f6/volumes" Dec 12 16:20:50 crc kubenswrapper[5130]: I1212 16:20:50.543880 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-psnw2" Dec 12 16:20:51 crc kubenswrapper[5130]: I1212 16:20:51.126508 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6jgv5" Dec 12 16:20:51 crc kubenswrapper[5130]: I1212 16:20:51.126898 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-6jgv5" Dec 12 16:20:51 crc kubenswrapper[5130]: I1212 16:20:51.173085 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6jgv5" Dec 12 
16:20:51 crc kubenswrapper[5130]: I1212 16:20:51.548310 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6jgv5" Dec 12 16:20:52 crc kubenswrapper[5130]: I1212 16:20:52.154916 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:52 crc kubenswrapper[5130]: I1212 16:20:52.154982 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:52 crc kubenswrapper[5130]: I1212 16:20:52.193908 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:20:52 crc kubenswrapper[5130]: I1212 16:20:52.557842 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:21:52 crc kubenswrapper[5130]: I1212 16:21:52.730374 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:21:52 crc kubenswrapper[5130]: I1212 16:21:52.732302 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:22:00 crc kubenswrapper[5130]: I1212 16:22:00.726466 5130 scope.go:117] "RemoveContainer" containerID="3f84b80c2f32e68a8eb79916fece466ce160a92d4d9b989d1bfd37673b951c48" Dec 12 16:22:00 crc kubenswrapper[5130]: I1212 16:22:00.745135 5130 scope.go:117] "RemoveContainer" 
containerID="818cbab9fa2109ab2203469a2d7999f6b39f7f70722424aa9e78038d779eb741" Dec 12 16:22:00 crc kubenswrapper[5130]: I1212 16:22:00.766831 5130 scope.go:117] "RemoveContainer" containerID="f1a01912ddee091b284981f73500faf3fcfd7a1071596baf5cd12e42fadf2802" Dec 12 16:22:00 crc kubenswrapper[5130]: I1212 16:22:00.787262 5130 scope.go:117] "RemoveContainer" containerID="6dba3c0695675d41f391363533d51f6311cd8233a6619881a3913b8726c0f824" Dec 12 16:22:00 crc kubenswrapper[5130]: I1212 16:22:00.844382 5130 scope.go:117] "RemoveContainer" containerID="96c12daa01120f19be833f82d5f8c18b27d7dc4c74ac5543dd248efa1a9301d1" Dec 12 16:22:22 crc kubenswrapper[5130]: I1212 16:22:22.730580 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:22:22 crc kubenswrapper[5130]: I1212 16:22:22.731002 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:22:52 crc kubenswrapper[5130]: I1212 16:22:52.730152 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:22:52 crc kubenswrapper[5130]: I1212 16:22:52.731301 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:22:52 crc kubenswrapper[5130]: I1212 16:22:52.731382 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:22:52 crc kubenswrapper[5130]: I1212 16:22:52.732408 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bab2472634bb02da167c93d4ee47778aaec9280425412ea74c819303d8206668"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:22:52 crc kubenswrapper[5130]: I1212 16:22:52.732491 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://bab2472634bb02da167c93d4ee47778aaec9280425412ea74c819303d8206668" gracePeriod=600 Dec 12 16:22:53 crc kubenswrapper[5130]: I1212 16:22:53.390697 5130 generic.go:358] "Generic (PLEG): container finished" podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="bab2472634bb02da167c93d4ee47778aaec9280425412ea74c819303d8206668" exitCode=0 Dec 12 16:22:53 crc kubenswrapper[5130]: I1212 16:22:53.390809 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"bab2472634bb02da167c93d4ee47778aaec9280425412ea74c819303d8206668"} Dec 12 16:22:53 crc kubenswrapper[5130]: I1212 16:22:53.390903 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" 
event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"456c71e76ba0cd0d996bbd0f00a10ca55a78f35663150737c8d410c0007a70cd"} Dec 12 16:22:53 crc kubenswrapper[5130]: I1212 16:22:53.390931 5130 scope.go:117] "RemoveContainer" containerID="945d8bb14b5e6a98fa9e0d91e099375cda051376ad0d1a72bc65b3cc8a701a5f" Dec 12 16:23:00 crc kubenswrapper[5130]: I1212 16:23:00.873958 5130 scope.go:117] "RemoveContainer" containerID="fb358025eb77871c75cb9b40f8c7bc36aebb9927910b33781e814fb8ac191a85" Dec 12 16:24:54 crc kubenswrapper[5130]: I1212 16:24:54.233204 5130 ???:1] "http: TLS handshake error from 192.168.126.11:56934: no serving certificate available for the kubelet" Dec 12 16:25:00 crc kubenswrapper[5130]: I1212 16:25:00.607501 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:25:00 crc kubenswrapper[5130]: I1212 16:25:00.607706 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:25:22 crc kubenswrapper[5130]: I1212 16:25:22.730915 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:25:22 crc kubenswrapper[5130]: I1212 16:25:22.732194 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:25:26 crc kubenswrapper[5130]: I1212 
16:25:26.874070 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr"] Dec 12 16:25:26 crc kubenswrapper[5130]: I1212 16:25:26.875556 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerName="kube-rbac-proxy" containerID="cri-o://8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98" gracePeriod=30 Dec 12 16:25:26 crc kubenswrapper[5130]: I1212 16:25:26.876263 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerName="ovnkube-cluster-manager" containerID="cri-o://6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.089012 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.100789 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wjw4g"] Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.101438 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="nbdb" containerID="cri-o://3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.101437 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="sbdb" containerID="cri-o://d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.101455 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovn-controller" containerID="cri-o://bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.101601 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="northd" containerID="cri-o://e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.101626 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovn-acl-logging" 
containerID="cri-o://6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.101634 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.101682 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-node" containerID="cri-o://66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.112534 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovnkube-config\") pod \"93aaac8c-bbe8-4744-9151-f486341fc9e8\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.114424 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "93aaac8c-bbe8-4744-9151-f486341fc9e8" (UID: "93aaac8c-bbe8-4744-9151-f486341fc9e8"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.114542 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5sn6\" (UniqueName: \"kubernetes.io/projected/93aaac8c-bbe8-4744-9151-f486341fc9e8-kube-api-access-s5sn6\") pod \"93aaac8c-bbe8-4744-9151-f486341fc9e8\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.114899 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovn-control-plane-metrics-cert\") pod \"93aaac8c-bbe8-4744-9151-f486341fc9e8\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.115735 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-env-overrides\") pod \"93aaac8c-bbe8-4744-9151-f486341fc9e8\" (UID: \"93aaac8c-bbe8-4744-9151-f486341fc9e8\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.116223 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "93aaac8c-bbe8-4744-9151-f486341fc9e8" (UID: "93aaac8c-bbe8-4744-9151-f486341fc9e8"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.116959 5130 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.116995 5130 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93aaac8c-bbe8-4744-9151-f486341fc9e8-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.125111 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh"] Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.125693 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerName="ovnkube-cluster-manager" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.125707 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerName="ovnkube-cluster-manager" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.125725 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerName="kube-rbac-proxy" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.125732 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerName="kube-rbac-proxy" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.125845 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerName="kube-rbac-proxy" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.125862 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" 
containerName="ovnkube-cluster-manager" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.131196 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.133578 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93aaac8c-bbe8-4744-9151-f486341fc9e8-kube-api-access-s5sn6" (OuterVolumeSpecName: "kube-api-access-s5sn6") pod "93aaac8c-bbe8-4744-9151-f486341fc9e8" (UID: "93aaac8c-bbe8-4744-9151-f486341fc9e8"). InnerVolumeSpecName "kube-api-access-s5sn6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.133649 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "93aaac8c-bbe8-4744-9151-f486341fc9e8" (UID: "93aaac8c-bbe8-4744-9151-f486341fc9e8"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.140971 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovnkube-controller" containerID="cri-o://9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3" gracePeriod=30 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.220124 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.220284 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.220311 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.220326 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp4kb\" (UniqueName: 
\"kubernetes.io/projected/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-kube-api-access-kp4kb\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.220373 5130 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93aaac8c-bbe8-4744-9151-f486341fc9e8-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.220386 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s5sn6\" (UniqueName: \"kubernetes.io/projected/93aaac8c-bbe8-4744-9151-f486341fc9e8-kube-api-access-s5sn6\") on node \"crc\" DevicePath \"\"" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.321888 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.321954 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.321975 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-env-overrides\") pod 
\"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.321994 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kp4kb\" (UniqueName: \"kubernetes.io/projected/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-kube-api-access-kp4kb\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.322607 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.322776 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.331791 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.340991 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-kp4kb\" (UniqueName: \"kubernetes.io/projected/9dfc6a17-c67e-4928-96ac-f36d2ba8aac9-kube-api-access-kp4kb\") pod \"ovnkube-control-plane-97c9b6c48-w5wsh\" (UID: \"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: E1212 16:25:27.350814 5130 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e1069d_2de7_4735_9056_84d955d960e2.slice/crio-3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e1069d_2de7_4735_9056_84d955d960e2.slice/crio-d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e1069d_2de7_4735_9056_84d955d960e2.slice/crio-conmon-d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce.scope\": RecentStats: unable to find data in memory cache]" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.370434 5130 generic.go:358] "Generic (PLEG): container finished" podID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerID="6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012" exitCode=0 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.370475 5130 generic.go:358] "Generic (PLEG): container finished" podID="93aaac8c-bbe8-4744-9151-f486341fc9e8" containerID="8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98" exitCode=0 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.370733 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" 
event={"ID":"93aaac8c-bbe8-4744-9151-f486341fc9e8","Type":"ContainerDied","Data":"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.370848 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" event={"ID":"93aaac8c-bbe8-4744-9151-f486341fc9e8","Type":"ContainerDied","Data":"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.370869 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" event={"ID":"93aaac8c-bbe8-4744-9151-f486341fc9e8","Type":"ContainerDied","Data":"ba06b437859831a4ba5b19dd77097aa461f0e5204c92fa041860480150a422a6"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.370914 5130 scope.go:117] "RemoveContainer" containerID="6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.371006 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.381598 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wjw4g_b8e1069d-2de7-4735-9056-84d955d960e2/ovn-acl-logging/0.log" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.382204 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wjw4g_b8e1069d-2de7-4735-9056-84d955d960e2/ovn-controller/0.log" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.382694 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3" exitCode=0 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.382781 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce" exitCode=0 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.382842 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab" exitCode=0 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.382904 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f" exitCode=0 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.382967 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e" exitCode=0 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.383027 5130 generic.go:358] "Generic (PLEG): container finished" 
podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252" exitCode=143 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.383080 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b" exitCode=143 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.382795 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.383244 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.383266 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.383278 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.383291 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e"} Dec 12 16:25:27 crc 
kubenswrapper[5130]: I1212 16:25:27.383306 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.383317 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.385537 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.385589 5130 generic.go:358] "Generic (PLEG): container finished" podID="6625166c-6688-498a-81c5-89ec476edef2" containerID="afec02ecdbcab7dac8db37c3a4ff38d4b68bab32ea1f47c40b0bb4f77a533698" exitCode=2 Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.385650 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzhgf" event={"ID":"6625166c-6688-498a-81c5-89ec476edef2","Type":"ContainerDied","Data":"afec02ecdbcab7dac8db37c3a4ff38d4b68bab32ea1f47c40b0bb4f77a533698"} Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.386387 5130 scope.go:117] "RemoveContainer" containerID="afec02ecdbcab7dac8db37c3a4ff38d4b68bab32ea1f47c40b0bb4f77a533698" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.387995 5130 scope.go:117] "RemoveContainer" containerID="8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.414508 5130 scope.go:117] "RemoveContainer" containerID="6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012" Dec 12 16:25:27 crc kubenswrapper[5130]: E1212 16:25:27.415073 5130 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012\": container with ID starting with 6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012 not found: ID does not exist" containerID="6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.415116 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012"} err="failed to get container status \"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012\": rpc error: code = NotFound desc = could not find container \"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012\": container with ID starting with 6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012 not found: ID does not exist" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.415142 5130 scope.go:117] "RemoveContainer" containerID="8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98" Dec 12 16:25:27 crc kubenswrapper[5130]: E1212 16:25:27.415587 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98\": container with ID starting with 8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98 not found: ID does not exist" containerID="8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.415637 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98"} err="failed to get container status \"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98\": rpc error: code = NotFound 
desc = could not find container \"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98\": container with ID starting with 8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98 not found: ID does not exist" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.415664 5130 scope.go:117] "RemoveContainer" containerID="6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.416404 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012"} err="failed to get container status \"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012\": rpc error: code = NotFound desc = could not find container \"6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012\": container with ID starting with 6d5ece37e09013374ef73ba71f75cbf2d2fdb4ef7845691f3c9193f82ec51012 not found: ID does not exist" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.416467 5130 scope.go:117] "RemoveContainer" containerID="8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.416841 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98"} err="failed to get container status \"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98\": rpc error: code = NotFound desc = could not find container \"8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98\": container with ID starting with 8077904b278e4e6829733d13cb548b022e502dcec54af194d5a0d5cfea4fbe98 not found: ID does not exist" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.424834 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr"] Dec 12 16:25:27 crc 
kubenswrapper[5130]: I1212 16:25:27.431424 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xtrkr"] Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.461331 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wjw4g_b8e1069d-2de7-4735-9056-84d955d960e2/ovn-acl-logging/0.log" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.461859 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wjw4g_b8e1069d-2de7-4735-9056-84d955d960e2/ovn-controller/0.log" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.462417 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.467408 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.522156 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4pkx2"] Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.524824 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-node-log\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.524861 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-kubelet\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.524907 5130 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-slash\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.524947 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh5qz\" (UniqueName: \"kubernetes.io/projected/b8e1069d-2de7-4735-9056-84d955d960e2-kube-api-access-dh5qz\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.524977 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525001 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-openvswitch\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525046 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-script-lib\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525086 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kubecfg-setup" Dec 12 16:25:27 crc 
kubenswrapper[5130]: I1212 16:25:27.525153 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kubecfg-setup" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525191 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525204 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525218 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="nbdb" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525226 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="nbdb" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525242 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="northd" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525249 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="northd" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525270 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovnkube-controller" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525278 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovnkube-controller" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525292 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" 
containerName="ovn-controller" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525299 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovn-controller" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525313 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovn-acl-logging" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525323 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovn-acl-logging" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525332 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-node" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525340 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-node" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525350 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="sbdb" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525359 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="sbdb" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525511 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="sbdb" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525523 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="nbdb" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525537 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" 
containerName="northd" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525547 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovn-acl-logging" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525556 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovn-controller" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525568 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-node" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525579 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="kube-rbac-proxy-ovn-metrics" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525590 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" containerName="ovnkube-controller" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525119 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-systemd-units\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526077 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-systemd\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526102 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-log-socket\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526124 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-config\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526159 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-etc-openvswitch\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526209 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-bin\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526250 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-netd\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526277 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-var-lib-openvswitch\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 
16:25:27.526295 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-ovn\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526323 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-ovn-kubernetes\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526342 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-netns\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526386 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-env-overrides\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526433 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e1069d-2de7-4735-9056-84d955d960e2-ovn-node-metrics-cert\") pod \"b8e1069d-2de7-4735-9056-84d955d960e2\" (UID: \"b8e1069d-2de7-4735-9056-84d955d960e2\") " Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525173 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod 
"b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525215 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529363 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529431 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525868 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525895 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.526030 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-node-log" (OuterVolumeSpecName: "node-log") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529198 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529234 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529259 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-log-socket" (OuterVolumeSpecName: "log-socket") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529294 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529324 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529323 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529350 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-slash" (OuterVolumeSpecName: "host-slash") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.525236 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529714 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.529854 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.531867 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.532514 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e1069d-2de7-4735-9056-84d955d960e2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.534097 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e1069d-2de7-4735-9056-84d955d960e2-kube-api-access-dh5qz" (OuterVolumeSpecName: "kube-api-access-dh5qz") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "kube-api-access-dh5qz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.554220 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b8e1069d-2de7-4735-9056-84d955d960e2" (UID: "b8e1069d-2de7-4735-9056-84d955d960e2"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.628523 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-ovnkube-config\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.628584 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-node-log\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.628613 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79c3e1a9-4077-41cc-8987-8284d900106c-ovn-node-metrics-cert\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.628759 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-ovnkube-script-lib\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.628902 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.628931 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6fl\" (UniqueName: \"kubernetes.io/projected/79c3e1a9-4077-41cc-8987-8284d900106c-kube-api-access-jh6fl\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.628976 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-kubelet\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629006 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-systemd-units\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629033 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-log-socket\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629094 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-ovn\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629196 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-run-netns\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629226 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-systemd\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629348 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-etc-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629399 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-var-lib-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629424 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-slash\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629453 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-env-overrides\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629546 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-cni-bin\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629599 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629629 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-run-ovn-kubernetes\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629663 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-cni-netd\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629803 5130 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629823 5130 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629833 5130 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-log-socket\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629843 5130 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629854 5130 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629863 5130 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629873 5130 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629883 5130 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629893 5130 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629903 5130 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629914 5130 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629924 5130 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629936 5130 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8e1069d-2de7-4735-9056-84d955d960e2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629947 5130 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-node-log\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629957 5130 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629966 5130 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-slash\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629974 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dh5qz\" (UniqueName: \"kubernetes.io/projected/b8e1069d-2de7-4735-9056-84d955d960e2-kube-api-access-dh5qz\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629984 5130 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.629995 5130 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8e1069d-2de7-4735-9056-84d955d960e2-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.630013 5130 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8e1069d-2de7-4735-9056-84d955d960e2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730774 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-run-netns\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730806 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-systemd\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730829 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-etc-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730849 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-var-lib-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730866 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-slash\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730889 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-env-overrides\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730908 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-cni-bin\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730933 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-run-netns\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730981 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731019 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-systemd\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731041 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-etc-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731062 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-var-lib-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.730939 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731087 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-run-ovn-kubernetes\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731126 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-slash\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731145 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-cni-netd\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731194 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-ovnkube-config\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731219 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-node-log\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731237 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79c3e1a9-4077-41cc-8987-8284d900106c-ovn-node-metrics-cert\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731256 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-ovnkube-script-lib\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731279 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731294 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6fl\" (UniqueName: \"kubernetes.io/projected/79c3e1a9-4077-41cc-8987-8284d900106c-kube-api-access-jh6fl\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731345 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-kubelet\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731361 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-systemd-units\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731377 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-log-socket\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731397 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-ovn\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731438 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-ovn\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731472 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-run-ovn-kubernetes\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731496 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-cni-bin\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731515 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-cni-netd\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731700 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-run-openvswitch\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731818 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-host-kubelet\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.731871 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-node-log\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.732052 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-systemd-units\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.732080 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/79c3e1a9-4077-41cc-8987-8284d900106c-log-socket\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.732393 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-env-overrides\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.732447 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-ovnkube-config\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.732510 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/79c3e1a9-4077-41cc-8987-8284d900106c-ovnkube-script-lib\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.748205 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79c3e1a9-4077-41cc-8987-8284d900106c-ovn-node-metrics-cert\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.751936 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6fl\" (UniqueName: \"kubernetes.io/projected/79c3e1a9-4077-41cc-8987-8284d900106c-kube-api-access-jh6fl\") pod \"ovnkube-node-4pkx2\" (UID: \"79c3e1a9-4077-41cc-8987-8284d900106c\") " pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: I1212 16:25:27.848408 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2"
Dec 12 16:25:27 crc kubenswrapper[5130]: W1212 16:25:27.871608 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79c3e1a9_4077_41cc_8987_8284d900106c.slice/crio-5cb782dab94969faf4ef058adcbd291cf428d0ebddb7223f76310d65f581b881 WatchSource:0}: Error finding container 5cb782dab94969faf4ef058adcbd291cf428d0ebddb7223f76310d65f581b881: Status 404 returned error can't find the container with id 5cb782dab94969faf4ef058adcbd291cf428d0ebddb7223f76310d65f581b881
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.378127 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93aaac8c-bbe8-4744-9151-f486341fc9e8" path="/var/lib/kubelet/pods/93aaac8c-bbe8-4744-9151-f486341fc9e8/volumes"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.406172 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" event={"ID":"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9","Type":"ContainerStarted","Data":"1ab25348dce4226025d5c891b5306949b49667d1237f5dfe2ac6f135d061b37a"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.406369 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" event={"ID":"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9","Type":"ContainerStarted","Data":"8e8b1ab37588a62b0bc07a66def17ffa12d80309a15173ae25f54eae67950485"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.406505 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" event={"ID":"9dfc6a17-c67e-4928-96ac-f36d2ba8aac9","Type":"ContainerStarted","Data":"25a051d971a6d98fe90ddc3e621e6235ced1a9f4bf9a70a37c44f385a8d622bc"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.407648 5130 generic.go:358] "Generic (PLEG): container finished" podID="79c3e1a9-4077-41cc-8987-8284d900106c" containerID="a2fc6af64aec28ff5d0530c549c96e1b0fccba0bb719b1adb5fefee968ec3b51" exitCode=0
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.407732 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerDied","Data":"a2fc6af64aec28ff5d0530c549c96e1b0fccba0bb719b1adb5fefee968ec3b51"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.407774 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"5cb782dab94969faf4ef058adcbd291cf428d0ebddb7223f76310d65f581b881"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.419624 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wjw4g_b8e1069d-2de7-4735-9056-84d955d960e2/ovn-acl-logging/0.log"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.420012 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wjw4g_b8e1069d-2de7-4735-9056-84d955d960e2/ovn-controller/0.log"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.420371 5130 generic.go:358] "Generic (PLEG): container finished" podID="b8e1069d-2de7-4735-9056-84d955d960e2" containerID="e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678" exitCode=0
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.420486 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.421065 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.421065 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wjw4g" event={"ID":"b8e1069d-2de7-4735-9056-84d955d960e2","Type":"ContainerDied","Data":"9772d84159436903e9c630cf5be836e1db70ed160ca3edf443a5851baa0aed8a"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.421083 5130 scope.go:117] "RemoveContainer" containerID="9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.430921 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-w5wsh" podStartSLOduration=2.430877903 podStartE2EDuration="2.430877903s" podCreationTimestamp="2025-12-12 16:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:25:28.424972794 +0000 UTC m=+628.322647636" watchObservedRunningTime="2025-12-12 16:25:28.430877903 +0000 UTC m=+628.328552735"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.444370 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.444641 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rzhgf" event={"ID":"6625166c-6688-498a-81c5-89ec476edef2","Type":"ContainerStarted","Data":"cbe38233e51547890d0edd707b23931bcb8fbb046d90ac72c79f0ba3b0a7bede"}
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.458890 5130 scope.go:117] "RemoveContainer" containerID="d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.483706 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wjw4g"]
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.483762 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wjw4g"]
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.504410 5130 scope.go:117] "RemoveContainer" containerID="3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.521153 5130 scope.go:117] "RemoveContainer" containerID="e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.561136 5130 scope.go:117] "RemoveContainer" containerID="3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.575230 5130 scope.go:117] "RemoveContainer" containerID="66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.591459 5130 scope.go:117] "RemoveContainer" containerID="6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.607686 5130 scope.go:117] "RemoveContainer" containerID="bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.624857 5130 scope.go:117] "RemoveContainer" containerID="69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a"
Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.645674 5130 scope.go:117] "RemoveContainer" containerID="9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3"
Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.647602 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3\": container with ID starting with
9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3 not found: ID does not exist" containerID="9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.647656 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3"} err="failed to get container status \"9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3\": rpc error: code = NotFound desc = could not find container \"9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3\": container with ID starting with 9f4cae3905d7dfcf5bed8c2ecdb906bea33ea8cda901a544bc68d0cbf648f1a3 not found: ID does not exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.647694 5130 scope.go:117] "RemoveContainer" containerID="d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.648297 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce\": container with ID starting with d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce not found: ID does not exist" containerID="d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.648342 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce"} err="failed to get container status \"d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce\": rpc error: code = NotFound desc = could not find container \"d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce\": container with ID starting with d34e34bafae9b32b4ad2c92c1f6291cb1ae8aeb7bbaaec632bca6f593f4714ce not found: ID does not 
exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.648372 5130 scope.go:117] "RemoveContainer" containerID="3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.648760 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab\": container with ID starting with 3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab not found: ID does not exist" containerID="3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.648793 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab"} err="failed to get container status \"3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab\": rpc error: code = NotFound desc = could not find container \"3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab\": container with ID starting with 3bf5c0519e6f79981cda5c3b44c4771a37b388c67da4acf49dafcc017a07aeab not found: ID does not exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.648814 5130 scope.go:117] "RemoveContainer" containerID="e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.649142 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678\": container with ID starting with e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678 not found: ID does not exist" containerID="e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.649194 5130 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678"} err="failed to get container status \"e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678\": rpc error: code = NotFound desc = could not find container \"e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678\": container with ID starting with e8d4e95518ff1d5139a4ee57dbcbfe036d48a417f5911cf0a63a0a05f87be678 not found: ID does not exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.649220 5130 scope.go:117] "RemoveContainer" containerID="3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.649570 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f\": container with ID starting with 3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f not found: ID does not exist" containerID="3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.649645 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f"} err="failed to get container status \"3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f\": rpc error: code = NotFound desc = could not find container \"3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f\": container with ID starting with 3a2c6da7494b0f067b5a4ca0c9bd288fbd2a57f762a469dd7135b4f821ba157f not found: ID does not exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.649674 5130 scope.go:117] "RemoveContainer" containerID="66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.650094 5130 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e\": container with ID starting with 66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e not found: ID does not exist" containerID="66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.650161 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e"} err="failed to get container status \"66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e\": rpc error: code = NotFound desc = could not find container \"66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e\": container with ID starting with 66825dab4b0efeb8cd1fc0fb55cf5335b5badbce38941f28b9afda0e37dbde1e not found: ID does not exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.650229 5130 scope.go:117] "RemoveContainer" containerID="6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.650743 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252\": container with ID starting with 6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252 not found: ID does not exist" containerID="6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.650777 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252"} err="failed to get container status \"6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252\": rpc error: code = NotFound desc = could 
not find container \"6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252\": container with ID starting with 6717af8aefa0fb00d5f76afca66eab9723939dbf058012f814546271d2440252 not found: ID does not exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.650793 5130 scope.go:117] "RemoveContainer" containerID="bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.651136 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b\": container with ID starting with bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b not found: ID does not exist" containerID="bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.651166 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b"} err="failed to get container status \"bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b\": rpc error: code = NotFound desc = could not find container \"bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b\": container with ID starting with bce8b7dc937e2ac83cf802fdeeda354e5cd07728f2626553748456ef30c9b63b not found: ID does not exist" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.651198 5130 scope.go:117] "RemoveContainer" containerID="69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a" Dec 12 16:25:28 crc kubenswrapper[5130]: E1212 16:25:28.652255 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a\": container with ID starting with 69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a not found: 
ID does not exist" containerID="69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a" Dec 12 16:25:28 crc kubenswrapper[5130]: I1212 16:25:28.652280 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a"} err="failed to get container status \"69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a\": rpc error: code = NotFound desc = could not find container \"69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a\": container with ID starting with 69fbed7f95e5ae0156f4fea59fc70af63cb97b8c26a6117dbe8e555c4371ea4a not found: ID does not exist" Dec 12 16:25:29 crc kubenswrapper[5130]: I1212 16:25:29.461389 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"7d30039914fadc5592a28be852b32959194bfe3a98295c95b14fda4329715f72"} Dec 12 16:25:29 crc kubenswrapper[5130]: I1212 16:25:29.461501 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"893225b0b361c1864190176a5f54cb14ab4cfcc3ef7b27aa094c605a6ab2d400"} Dec 12 16:25:29 crc kubenswrapper[5130]: I1212 16:25:29.461524 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"39d111def883371f21f6e5c2dbc1a59b16843a82d8c169a79dd807a75c44db32"} Dec 12 16:25:29 crc kubenswrapper[5130]: I1212 16:25:29.461542 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"5553bc768335dd5d98862db3b49aa316b0b053e029a3f80ae9c1e8a3413980d6"} Dec 12 16:25:29 crc 
kubenswrapper[5130]: I1212 16:25:29.461555 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"eefc72e559328a7eb16256617304261112a9ee1d87c3b47116c4c6226bba55cf"} Dec 12 16:25:29 crc kubenswrapper[5130]: I1212 16:25:29.461567 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"52df2526bbfc9a62fd9c01c8cf11371b7dc727d744c6cc99bb88f12cf234a8b0"} Dec 12 16:25:30 crc kubenswrapper[5130]: I1212 16:25:30.385839 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e1069d-2de7-4735-9056-84d955d960e2" path="/var/lib/kubelet/pods/b8e1069d-2de7-4735-9056-84d955d960e2/volumes" Dec 12 16:25:32 crc kubenswrapper[5130]: I1212 16:25:32.489833 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"38e8f16d7bf575b9fd2335c9cd8ab9f976f2166e0d348fb4ca53e95d6bdb66d6"} Dec 12 16:25:36 crc kubenswrapper[5130]: I1212 16:25:36.519586 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" event={"ID":"79c3e1a9-4077-41cc-8987-8284d900106c","Type":"ContainerStarted","Data":"d600fd84a6dd5a673166b086e16cfca1df298cf889c3495fe9409b82c19eece4"} Dec 12 16:25:36 crc kubenswrapper[5130]: I1212 16:25:36.522350 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" Dec 12 16:25:36 crc kubenswrapper[5130]: I1212 16:25:36.522390 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" Dec 12 16:25:36 crc kubenswrapper[5130]: I1212 16:25:36.522404 5130 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" Dec 12 16:25:36 crc kubenswrapper[5130]: I1212 16:25:36.560470 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" podStartSLOduration=9.56043886 podStartE2EDuration="9.56043886s" podCreationTimestamp="2025-12-12 16:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:25:36.556214674 +0000 UTC m=+636.453889506" watchObservedRunningTime="2025-12-12 16:25:36.56043886 +0000 UTC m=+636.458113692" Dec 12 16:25:36 crc kubenswrapper[5130]: I1212 16:25:36.571042 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" Dec 12 16:25:36 crc kubenswrapper[5130]: I1212 16:25:36.575073 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" Dec 12 16:25:52 crc kubenswrapper[5130]: I1212 16:25:52.730907 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:25:52 crc kubenswrapper[5130]: I1212 16:25:52.731929 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:26:08 crc kubenswrapper[5130]: I1212 16:26:08.558584 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4pkx2" Dec 12 16:26:22 crc 
kubenswrapper[5130]: I1212 16:26:22.730127 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:26:22 crc kubenswrapper[5130]: I1212 16:26:22.730754 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:26:22 crc kubenswrapper[5130]: I1212 16:26:22.730806 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:26:22 crc kubenswrapper[5130]: I1212 16:26:22.731466 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"456c71e76ba0cd0d996bbd0f00a10ca55a78f35663150737c8d410c0007a70cd"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:26:22 crc kubenswrapper[5130]: I1212 16:26:22.731527 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://456c71e76ba0cd0d996bbd0f00a10ca55a78f35663150737c8d410c0007a70cd" gracePeriod=600 Dec 12 16:26:23 crc kubenswrapper[5130]: I1212 16:26:23.491865 5130 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:26:23 crc kubenswrapper[5130]: I1212 16:26:23.829405 5130 
generic.go:358] "Generic (PLEG): container finished" podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="456c71e76ba0cd0d996bbd0f00a10ca55a78f35663150737c8d410c0007a70cd" exitCode=0 Dec 12 16:26:23 crc kubenswrapper[5130]: I1212 16:26:23.829507 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"456c71e76ba0cd0d996bbd0f00a10ca55a78f35663150737c8d410c0007a70cd"} Dec 12 16:26:23 crc kubenswrapper[5130]: I1212 16:26:23.829922 5130 scope.go:117] "RemoveContainer" containerID="bab2472634bb02da167c93d4ee47778aaec9280425412ea74c819303d8206668" Dec 12 16:26:24 crc kubenswrapper[5130]: I1212 16:26:24.838836 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"3adb890ff85b18dd025cb02aa6704930a7f2cdc1bd92119b5fe1c8a455d2a99e"} Dec 12 16:26:38 crc kubenswrapper[5130]: I1212 16:26:38.908077 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jkgqd"] Dec 12 16:26:38 crc kubenswrapper[5130]: I1212 16:26:38.909676 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jkgqd" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="registry-server" containerID="cri-o://01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957" gracePeriod=30 Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.241674 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.357424 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slrcq\" (UniqueName: \"kubernetes.io/projected/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-kube-api-access-slrcq\") pod \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.357612 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-catalog-content\") pod \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.357662 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-utilities\") pod \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\" (UID: \"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4\") " Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.359054 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-utilities" (OuterVolumeSpecName: "utilities") pod "5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" (UID: "5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.366959 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-kube-api-access-slrcq" (OuterVolumeSpecName: "kube-api-access-slrcq") pod "5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" (UID: "5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4"). InnerVolumeSpecName "kube-api-access-slrcq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.368355 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" (UID: "5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.458876 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-slrcq\" (UniqueName: \"kubernetes.io/projected/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-kube-api-access-slrcq\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.458969 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.458979 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.934982 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-6md9w"] Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.936389 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="extract-content" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.936411 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="extract-content" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.936440 5130 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="extract-utilities" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.936448 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="extract-utilities" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.936478 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="registry-server" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.936486 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="registry-server" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.936606 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerName="registry-server" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.940305 5130 generic.go:358] "Generic (PLEG): container finished" podID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" containerID="01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957" exitCode=0 Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.958917 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jkgqd" event={"ID":"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4","Type":"ContainerDied","Data":"01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957"} Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.958999 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jkgqd" event={"ID":"5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4","Type":"ContainerDied","Data":"8bea72c104dd5234c4cd3783a470a5fd8615adb812871de1a18d6c25aed0610e"} Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.959021 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-5d9d95bf5b-6md9w"] Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.959031 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jkgqd" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.959112 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.959060 5130 scope.go:117] "RemoveContainer" containerID="01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957" Dec 12 16:26:39 crc kubenswrapper[5130]: I1212 16:26:39.987255 5130 scope.go:117] "RemoveContainer" containerID="8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.013415 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jkgqd"] Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.020414 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jkgqd"] Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.021487 5130 scope.go:117] "RemoveContainer" containerID="816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.041840 5130 scope.go:117] "RemoveContainer" containerID="01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957" Dec 12 16:26:40 crc kubenswrapper[5130]: E1212 16:26:40.042500 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957\": container with ID starting with 01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957 not found: ID does not exist" containerID="01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957" Dec 12 
16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.042542 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957"} err="failed to get container status \"01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957\": rpc error: code = NotFound desc = could not find container \"01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957\": container with ID starting with 01ef98527c3592dc8174a4215f06a085d178e59f99a4e233f9233d9b25e45957 not found: ID does not exist" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.042574 5130 scope.go:117] "RemoveContainer" containerID="8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6" Dec 12 16:26:40 crc kubenswrapper[5130]: E1212 16:26:40.042941 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6\": container with ID starting with 8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6 not found: ID does not exist" containerID="8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.043021 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6"} err="failed to get container status \"8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6\": rpc error: code = NotFound desc = could not find container \"8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6\": container with ID starting with 8d713eced25dc4ebd226593274284e55bedf8dfa647a3bed76530ec8ce0465f6 not found: ID does not exist" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.043042 5130 scope.go:117] "RemoveContainer" 
containerID="816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212" Dec 12 16:26:40 crc kubenswrapper[5130]: E1212 16:26:40.043399 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212\": container with ID starting with 816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212 not found: ID does not exist" containerID="816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.043427 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212"} err="failed to get container status \"816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212\": rpc error: code = NotFound desc = could not find container \"816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212\": container with ID starting with 816d1fa63990101251f9b52426871cea7b4fcaf220e9ab486b048734c10b2212 not found: ID does not exist" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068366 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b75bc011-274b-4fb1-8311-15ffa1b33366-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068433 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-registry-tls\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" 
Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068462 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b75bc011-274b-4fb1-8311-15ffa1b33366-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068502 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b75bc011-274b-4fb1-8311-15ffa1b33366-registry-certificates\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068519 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fckq9\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-kube-api-access-fckq9\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068542 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75bc011-274b-4fb1-8311-15ffa1b33366-trusted-ca\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068575 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-bound-sa-token\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.068608 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.096476 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.169579 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b75bc011-274b-4fb1-8311-15ffa1b33366-registry-certificates\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.170157 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fckq9\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-kube-api-access-fckq9\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc 
kubenswrapper[5130]: I1212 16:26:40.170209 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75bc011-274b-4fb1-8311-15ffa1b33366-trusted-ca\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.170237 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-bound-sa-token\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.170276 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b75bc011-274b-4fb1-8311-15ffa1b33366-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.170305 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-registry-tls\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.170333 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b75bc011-274b-4fb1-8311-15ffa1b33366-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.171020 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b75bc011-274b-4fb1-8311-15ffa1b33366-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.172075 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b75bc011-274b-4fb1-8311-15ffa1b33366-registry-certificates\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.173059 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75bc011-274b-4fb1-8311-15ffa1b33366-trusted-ca\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.176424 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-registry-tls\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.177362 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b75bc011-274b-4fb1-8311-15ffa1b33366-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: 
\"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.189368 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fckq9\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-kube-api-access-fckq9\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.189697 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b75bc011-274b-4fb1-8311-15ffa1b33366-bound-sa-token\") pod \"image-registry-5d9d95bf5b-6md9w\" (UID: \"b75bc011-274b-4fb1-8311-15ffa1b33366\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.290640 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.388277 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4" path="/var/lib/kubelet/pods/5307a6d2-3f00-4ebd-8c7b-e101e24f4dd4/volumes" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.517020 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-6md9w"] Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.948621 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" event={"ID":"b75bc011-274b-4fb1-8311-15ffa1b33366","Type":"ContainerStarted","Data":"77f68c2f6a932c591cdbd19f637bc22490304e40c0ea5e316d0ddd80c425ba21"} Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.950516 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" event={"ID":"b75bc011-274b-4fb1-8311-15ffa1b33366","Type":"ContainerStarted","Data":"2222d2af5cfbae8c2cbbb82776f89d17b3250ebe67976d95e30a580990050687"} Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.951253 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:26:40 crc kubenswrapper[5130]: I1212 16:26:40.991084 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" podStartSLOduration=1.991059753 podStartE2EDuration="1.991059753s" podCreationTimestamp="2025-12-12 16:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:26:40.986070477 +0000 UTC m=+700.883745309" watchObservedRunningTime="2025-12-12 16:26:40.991059753 +0000 UTC m=+700.888734585" Dec 12 16:26:42 crc 
kubenswrapper[5130]: I1212 16:26:42.838003 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85"] Dec 12 16:26:42 crc kubenswrapper[5130]: I1212 16:26:42.847360 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:42 crc kubenswrapper[5130]: I1212 16:26:42.848634 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85"] Dec 12 16:26:42 crc kubenswrapper[5130]: I1212 16:26:42.853498 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 16:26:42 crc kubenswrapper[5130]: I1212 16:26:42.923783 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmhw\" (UniqueName: \"kubernetes.io/projected/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-kube-api-access-5pmhw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:42 crc kubenswrapper[5130]: I1212 16:26:42.923898 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:42 crc kubenswrapper[5130]: I1212 16:26:42.924171 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.025488 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.025583 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.025623 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmhw\" (UniqueName: \"kubernetes.io/projected/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-kube-api-access-5pmhw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.026462 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: 
\"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.026476 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.048105 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmhw\" (UniqueName: \"kubernetes.io/projected/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-kube-api-access-5pmhw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.174377 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.390510 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85"] Dec 12 16:26:43 crc kubenswrapper[5130]: W1212 16:26:43.399121 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod475bdfbd_4d7a_4f0b_9483_7ad3811012cf.slice/crio-1ee56347ed1b9c5be047fbf9b682c5d1ce70b62f833bef1400a0b70fbb9f59d9 WatchSource:0}: Error finding container 1ee56347ed1b9c5be047fbf9b682c5d1ce70b62f833bef1400a0b70fbb9f59d9: Status 404 returned error can't find the container with id 1ee56347ed1b9c5be047fbf9b682c5d1ce70b62f833bef1400a0b70fbb9f59d9 Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.970949 5130 generic.go:358] "Generic (PLEG): container finished" podID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerID="37c5384f9355fc87c3a88135674c5b8126187a4decca00af7ab616e77258e89f" exitCode=0 Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.971029 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" event={"ID":"475bdfbd-4d7a-4f0b-9483-7ad3811012cf","Type":"ContainerDied","Data":"37c5384f9355fc87c3a88135674c5b8126187a4decca00af7ab616e77258e89f"} Dec 12 16:26:43 crc kubenswrapper[5130]: I1212 16:26:43.973667 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" event={"ID":"475bdfbd-4d7a-4f0b-9483-7ad3811012cf","Type":"ContainerStarted","Data":"1ee56347ed1b9c5be047fbf9b682c5d1ce70b62f833bef1400a0b70fbb9f59d9"} Dec 12 16:26:45 crc kubenswrapper[5130]: I1212 16:26:45.989291 5130 generic.go:358] "Generic (PLEG): container finished" 
podID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerID="3d03ba7b3e25d14bb475c5bae6380bedf2183d916cf4484dee4249f2371f2b6f" exitCode=0 Dec 12 16:26:45 crc kubenswrapper[5130]: I1212 16:26:45.989438 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" event={"ID":"475bdfbd-4d7a-4f0b-9483-7ad3811012cf","Type":"ContainerDied","Data":"3d03ba7b3e25d14bb475c5bae6380bedf2183d916cf4484dee4249f2371f2b6f"} Dec 12 16:26:47 crc kubenswrapper[5130]: I1212 16:26:46.999972 5130 generic.go:358] "Generic (PLEG): container finished" podID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerID="97e5260a7b120cb70e594ee0c8ad775df78754d36a4a40f4dd19be9a5759446a" exitCode=0 Dec 12 16:26:47 crc kubenswrapper[5130]: I1212 16:26:47.000086 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" event={"ID":"475bdfbd-4d7a-4f0b-9483-7ad3811012cf","Type":"ContainerDied","Data":"97e5260a7b120cb70e594ee0c8ad775df78754d36a4a40f4dd19be9a5759446a"} Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.256115 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.313341 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pmhw\" (UniqueName: \"kubernetes.io/projected/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-kube-api-access-5pmhw\") pod \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.313917 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-util\") pod \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.314259 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-bundle\") pod \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\" (UID: \"475bdfbd-4d7a-4f0b-9483-7ad3811012cf\") " Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.316687 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-bundle" (OuterVolumeSpecName: "bundle") pod "475bdfbd-4d7a-4f0b-9483-7ad3811012cf" (UID: "475bdfbd-4d7a-4f0b-9483-7ad3811012cf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.325797 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-util" (OuterVolumeSpecName: "util") pod "475bdfbd-4d7a-4f0b-9483-7ad3811012cf" (UID: "475bdfbd-4d7a-4f0b-9483-7ad3811012cf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.327503 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-kube-api-access-5pmhw" (OuterVolumeSpecName: "kube-api-access-5pmhw") pod "475bdfbd-4d7a-4f0b-9483-7ad3811012cf" (UID: "475bdfbd-4d7a-4f0b-9483-7ad3811012cf"). InnerVolumeSpecName "kube-api-access-5pmhw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.416397 5130 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.416441 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5pmhw\" (UniqueName: \"kubernetes.io/projected/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-kube-api-access-5pmhw\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:48 crc kubenswrapper[5130]: I1212 16:26:48.416453 5130 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/475bdfbd-4d7a-4f0b-9483-7ad3811012cf-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.014240 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" event={"ID":"475bdfbd-4d7a-4f0b-9483-7ad3811012cf","Type":"ContainerDied","Data":"1ee56347ed1b9c5be047fbf9b682c5d1ce70b62f833bef1400a0b70fbb9f59d9"} Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.014788 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee56347ed1b9c5be047fbf9b682c5d1ce70b62f833bef1400a0b70fbb9f59d9" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.014329 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105bc85" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.215115 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"] Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.215834 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerName="extract" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.215856 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerName="extract" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.215872 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerName="pull" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.215879 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerName="pull" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.215913 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerName="util" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.215920 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerName="util" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.216019 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="475bdfbd-4d7a-4f0b-9483-7ad3811012cf" containerName="extract" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.252889 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"] Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.253126 5130 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.257013 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.331421 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58nh\" (UniqueName: \"kubernetes.io/projected/fd6585e4-c189-4aaf-98f6-4081874d4336-kube-api-access-k58nh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.331540 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.331893 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.433862 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.433999 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.434343 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k58nh\" (UniqueName: \"kubernetes.io/projected/fd6585e4-c189-4aaf-98f6-4081874d4336-kube-api-access-k58nh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.434647 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.434721 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.458382 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k58nh\" (UniqueName: \"kubernetes.io/projected/fd6585e4-c189-4aaf-98f6-4081874d4336-kube-api-access-k58nh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.571813 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:49 crc kubenswrapper[5130]: I1212 16:26:49.806227 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"]
Dec 12 16:26:49 crc kubenswrapper[5130]: W1212 16:26:49.812646 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd6585e4_c189_4aaf_98f6_4081874d4336.slice/crio-a5b13036e403cee2132c5c7872f885a1a961158dcae306204f37b292640e09b1 WatchSource:0}: Error finding container a5b13036e403cee2132c5c7872f885a1a961158dcae306204f37b292640e09b1: Status 404 returned error can't find the container with id a5b13036e403cee2132c5c7872f885a1a961158dcae306204f37b292640e09b1
Dec 12 16:26:50 crc kubenswrapper[5130]: I1212 16:26:50.023579 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" event={"ID":"fd6585e4-c189-4aaf-98f6-4081874d4336","Type":"ContainerStarted","Data":"b37d26eed8d134e0489783577d94bc1a1b0356a9584b0917abc5bcf372fce10e"}
Dec 12 16:26:50 crc kubenswrapper[5130]: I1212 16:26:50.023626 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" event={"ID":"fd6585e4-c189-4aaf-98f6-4081874d4336","Type":"ContainerStarted","Data":"a5b13036e403cee2132c5c7872f885a1a961158dcae306204f37b292640e09b1"}
Dec 12 16:26:51 crc kubenswrapper[5130]: I1212 16:26:51.032870 5130 generic.go:358] "Generic (PLEG): container finished" podID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerID="b37d26eed8d134e0489783577d94bc1a1b0356a9584b0917abc5bcf372fce10e" exitCode=0
Dec 12 16:26:51 crc kubenswrapper[5130]: I1212 16:26:51.033007 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" event={"ID":"fd6585e4-c189-4aaf-98f6-4081874d4336","Type":"ContainerDied","Data":"b37d26eed8d134e0489783577d94bc1a1b0356a9584b0917abc5bcf372fce10e"}
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.075746 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"]
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.091909 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.119999 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"]
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.194920 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrsc\" (UniqueName: \"kubernetes.io/projected/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-kube-api-access-lzrsc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.195125 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.195233 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.209156 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8pl6d"]
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.226892 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.253371 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pl6d"]
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.296606 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-utilities\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.296700 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.296726 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.296775 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g62bp\" (UniqueName: \"kubernetes.io/projected/3d7f1528-4228-46f7-8f31-311c3c561112-kube-api-access-g62bp\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.296800 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-catalog-content\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.296819 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrsc\" (UniqueName: \"kubernetes.io/projected/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-kube-api-access-lzrsc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.297955 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.298135 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.349294 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrsc\" (UniqueName: \"kubernetes.io/projected/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-kube-api-access-lzrsc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.398141 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g62bp\" (UniqueName: \"kubernetes.io/projected/3d7f1528-4228-46f7-8f31-311c3c561112-kube-api-access-g62bp\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.398231 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-catalog-content\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.398274 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-utilities\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.398857 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-utilities\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.399174 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-catalog-content\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.408827 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.424039 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g62bp\" (UniqueName: \"kubernetes.io/projected/3d7f1528-4228-46f7-8f31-311c3c561112-kube-api-access-g62bp\") pod \"certified-operators-8pl6d\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") " pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.546937 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:26:53 crc kubenswrapper[5130]: I1212 16:26:53.947123 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"]
Dec 12 16:26:53 crc kubenswrapper[5130]: W1212 16:26:53.951393 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d29eb0_7bf6_47c0_bd9a_c7ae45a7b728.slice/crio-2e8b42efd3171feed15bbc44b54f6ac59003e21923d0589f40a1a944cfdccf56 WatchSource:0}: Error finding container 2e8b42efd3171feed15bbc44b54f6ac59003e21923d0589f40a1a944cfdccf56: Status 404 returned error can't find the container with id 2e8b42efd3171feed15bbc44b54f6ac59003e21923d0589f40a1a944cfdccf56
Dec 12 16:26:54 crc kubenswrapper[5130]: I1212 16:26:54.051262 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pl6d"]
Dec 12 16:26:54 crc kubenswrapper[5130]: I1212 16:26:54.062419 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5" event={"ID":"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728","Type":"ContainerStarted","Data":"2e8b42efd3171feed15bbc44b54f6ac59003e21923d0589f40a1a944cfdccf56"}
Dec 12 16:26:54 crc kubenswrapper[5130]: I1212 16:26:54.075560 5130 generic.go:358] "Generic (PLEG): container finished" podID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerID="d46c4e6986332d897c42ab0160e9ffa1ca3db6f41ba1353e903a4eed29d31639" exitCode=0
Dec 12 16:26:54 crc kubenswrapper[5130]: I1212 16:26:54.075634 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" event={"ID":"fd6585e4-c189-4aaf-98f6-4081874d4336","Type":"ContainerDied","Data":"d46c4e6986332d897c42ab0160e9ffa1ca3db6f41ba1353e903a4eed29d31639"}
Dec 12 16:26:55 crc kubenswrapper[5130]: I1212 16:26:55.084825 5130 generic.go:358] "Generic (PLEG): container finished" podID="3d7f1528-4228-46f7-8f31-311c3c561112" containerID="6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5" exitCode=0
Dec 12 16:26:55 crc kubenswrapper[5130]: I1212 16:26:55.084948 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pl6d" event={"ID":"3d7f1528-4228-46f7-8f31-311c3c561112","Type":"ContainerDied","Data":"6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5"}
Dec 12 16:26:55 crc kubenswrapper[5130]: I1212 16:26:55.085012 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pl6d" event={"ID":"3d7f1528-4228-46f7-8f31-311c3c561112","Type":"ContainerStarted","Data":"261a1835ba124316e408e51da506d5cb50fed33202cf3b038da80e6df8dcbac3"}
Dec 12 16:26:55 crc kubenswrapper[5130]: I1212 16:26:55.086832 5130 generic.go:358] "Generic (PLEG): container finished" podID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerID="32194d0509eae817a805e067aeeecf2489c1cdbcd6f9037069be77b4eb061e58" exitCode=0
Dec 12 16:26:55 crc kubenswrapper[5130]: I1212 16:26:55.086994 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5" event={"ID":"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728","Type":"ContainerDied","Data":"32194d0509eae817a805e067aeeecf2489c1cdbcd6f9037069be77b4eb061e58"}
Dec 12 16:26:55 crc kubenswrapper[5130]: I1212 16:26:55.090167 5130 generic.go:358] "Generic (PLEG): container finished" podID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerID="19f27402398fa9b397c92bf399b2fc13233876cd75892bcb4104c799081b2f86" exitCode=0
Dec 12 16:26:55 crc kubenswrapper[5130]: I1212 16:26:55.090278 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" event={"ID":"fd6585e4-c189-4aaf-98f6-4081874d4336","Type":"ContainerDied","Data":"19f27402398fa9b397c92bf399b2fc13233876cd75892bcb4104c799081b2f86"}
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.349794 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.442971 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k58nh\" (UniqueName: \"kubernetes.io/projected/fd6585e4-c189-4aaf-98f6-4081874d4336-kube-api-access-k58nh\") pod \"fd6585e4-c189-4aaf-98f6-4081874d4336\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") "
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.443289 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-bundle\") pod \"fd6585e4-c189-4aaf-98f6-4081874d4336\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") "
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.443322 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-util\") pod \"fd6585e4-c189-4aaf-98f6-4081874d4336\" (UID: \"fd6585e4-c189-4aaf-98f6-4081874d4336\") "
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.447238 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-bundle" (OuterVolumeSpecName: "bundle") pod "fd6585e4-c189-4aaf-98f6-4081874d4336" (UID: "fd6585e4-c189-4aaf-98f6-4081874d4336"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.452028 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-util" (OuterVolumeSpecName: "util") pod "fd6585e4-c189-4aaf-98f6-4081874d4336" (UID: "fd6585e4-c189-4aaf-98f6-4081874d4336"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.452236 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6585e4-c189-4aaf-98f6-4081874d4336-kube-api-access-k58nh" (OuterVolumeSpecName: "kube-api-access-k58nh") pod "fd6585e4-c189-4aaf-98f6-4081874d4336" (UID: "fd6585e4-c189-4aaf-98f6-4081874d4336"). InnerVolumeSpecName "kube-api-access-k58nh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.545042 5130 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-bundle\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.545498 5130 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd6585e4-c189-4aaf-98f6-4081874d4336-util\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:56 crc kubenswrapper[5130]: I1212 16:26:56.545509 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k58nh\" (UniqueName: \"kubernetes.io/projected/fd6585e4-c189-4aaf-98f6-4081874d4336-kube-api-access-k58nh\") on node \"crc\" DevicePath \"\""
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.108930 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx" event={"ID":"fd6585e4-c189-4aaf-98f6-4081874d4336","Type":"ContainerDied","Data":"a5b13036e403cee2132c5c7872f885a1a961158dcae306204f37b292640e09b1"}
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.108987 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b13036e403cee2132c5c7872f885a1a961158dcae306204f37b292640e09b1"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.108991 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ep8glx"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.773643 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b4n58"]
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.774494 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerName="extract"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.774511 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerName="extract"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.774534 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerName="pull"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.774543 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerName="pull"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.774558 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerName="util"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.774565 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerName="util"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.774655 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="fd6585e4-c189-4aaf-98f6-4081874d4336" containerName="extract"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.788742 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.799530 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4n58"]
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.868563 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-catalog-content\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.868789 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9bs2\" (UniqueName: \"kubernetes.io/projected/5f56514c-f6b2-4f15-8a4a-615ab5442708-kube-api-access-n9bs2\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.868821 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-utilities\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.970669 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n9bs2\" (UniqueName: \"kubernetes.io/projected/5f56514c-f6b2-4f15-8a4a-615ab5442708-kube-api-access-n9bs2\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.970719 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-utilities\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.970784 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-catalog-content\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.971633 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-catalog-content\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.971930 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-utilities\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:57 crc kubenswrapper[5130]: I1212 16:26:57.991571 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9bs2\" (UniqueName: \"kubernetes.io/projected/5f56514c-f6b2-4f15-8a4a-615ab5442708-kube-api-access-n9bs2\") pod \"redhat-operators-b4n58\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:58 crc kubenswrapper[5130]: I1212 16:26:58.106129 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:26:58 crc kubenswrapper[5130]: I1212 16:26:58.497302 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4n58"]
Dec 12 16:26:59 crc kubenswrapper[5130]: I1212 16:26:59.132948 5130 generic.go:358] "Generic (PLEG): container finished" podID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerID="a35dd526ca4d2cdf3307d75472e2757ffbb122ab329a0106eeceb830dfe67dcd" exitCode=0
Dec 12 16:26:59 crc kubenswrapper[5130]: I1212 16:26:59.133213 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4n58" event={"ID":"5f56514c-f6b2-4f15-8a4a-615ab5442708","Type":"ContainerDied","Data":"a35dd526ca4d2cdf3307d75472e2757ffbb122ab329a0106eeceb830dfe67dcd"}
Dec 12 16:26:59 crc kubenswrapper[5130]: I1212 16:26:59.133720 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4n58" event={"ID":"5f56514c-f6b2-4f15-8a4a-615ab5442708","Type":"ContainerStarted","Data":"fe08d83e6afea017058d5fc6f57ddccb08368d775f104e9ac99e55142871b310"}
Dec 12 16:26:59 crc kubenswrapper[5130]: I1212 16:26:59.152875 5130 generic.go:358] "Generic (PLEG): container finished" podID="3d7f1528-4228-46f7-8f31-311c3c561112" containerID="08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7" exitCode=0
Dec 12 16:26:59 crc kubenswrapper[5130]: I1212 16:26:59.153006 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pl6d" event={"ID":"3d7f1528-4228-46f7-8f31-311c3c561112","Type":"ContainerDied","Data":"08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7"}
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.160376 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pl6d" event={"ID":"3d7f1528-4228-46f7-8f31-311c3c561112","Type":"ContainerStarted","Data":"1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5"}
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.181820 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8pl6d" podStartSLOduration=4.110827777 podStartE2EDuration="7.1818037s" podCreationTimestamp="2025-12-12 16:26:53 +0000 UTC" firstStartedPulling="2025-12-12 16:26:55.086067551 +0000 UTC m=+714.983742383" lastFinishedPulling="2025-12-12 16:26:58.157043474 +0000 UTC m=+718.054718306" observedRunningTime="2025-12-12 16:27:00.176254299 +0000 UTC m=+720.073929141" watchObservedRunningTime="2025-12-12 16:27:00.1818037 +0000 UTC m=+720.079478532"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.484995 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wbj29"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.510109 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.511191 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wbj29"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.515667 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-xntsg\""
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.515732 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.515767 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.619171 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdf5v\" (UniqueName: \"kubernetes.io/projected/18744739-d26e-4056-a036-656151fcc824-kube-api-access-vdf5v\") pod \"obo-prometheus-operator-86648f486b-wbj29\" (UID: \"18744739-d26e-4056-a036-656151fcc824\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.638432 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.647413 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.650676 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-snb8c\""
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.650981 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\""
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.654772 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.664369 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.667416 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.672155 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.720326 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdf5v\" (UniqueName: \"kubernetes.io/projected/18744739-d26e-4056-a036-656151fcc824-kube-api-access-vdf5v\") pod \"obo-prometheus-operator-86648f486b-wbj29\" (UID: \"18744739-d26e-4056-a036-656151fcc824\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.769160 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdf5v\" (UniqueName: \"kubernetes.io/projected/18744739-d26e-4056-a036-656151fcc824-kube-api-access-vdf5v\") pod \"obo-prometheus-operator-86648f486b-wbj29\" (UID: \"18744739-d26e-4056-a036-656151fcc824\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.822116 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6b5aa8b-142f-4f74-a328-f0937a20672f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g\" (UID: \"c6b5aa8b-142f-4f74-a328-f0937a20672f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.822209 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6b5aa8b-142f-4f74-a328-f0937a20672f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g\" (UID: \"c6b5aa8b-142f-4f74-a328-f0937a20672f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.822288 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc636fbb-cf50-4a1f-82f5-81db89bb0f5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr\" (UID: \"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.822316 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc636fbb-cf50-4a1f-82f5-81db89bb0f5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr\" (UID: \"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.840445 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.860966 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-qxqmn"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.878428 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-qxqmn"]
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.878587 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-qxqmn"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.890469 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\""
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.892215 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-dbxwx\""
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.932904 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6b5aa8b-142f-4f74-a328-f0937a20672f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g\" (UID: \"c6b5aa8b-142f-4f74-a328-f0937a20672f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.933005 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6b5aa8b-142f-4f74-a328-f0937a20672f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g\" (UID: \"c6b5aa8b-142f-4f74-a328-f0937a20672f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.933080 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc636fbb-cf50-4a1f-82f5-81db89bb0f5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr\" (UID: \"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.933118 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc636fbb-cf50-4a1f-82f5-81db89bb0f5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr\" (UID: \"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.948616 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6b5aa8b-142f-4f74-a328-f0937a20672f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g\" (UID: \"c6b5aa8b-142f-4f74-a328-f0937a20672f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"
Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.956161 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc636fbb-cf50-4a1f-82f5-81db89bb0f5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr\" (UID: 
\"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr" Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.957805 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6b5aa8b-142f-4f74-a328-f0937a20672f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g\" (UID: \"c6b5aa8b-142f-4f74-a328-f0937a20672f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g" Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.962011 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc636fbb-cf50-4a1f-82f5-81db89bb0f5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr\" (UID: \"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr" Dec 12 16:27:00 crc kubenswrapper[5130]: I1212 16:27:00.968872 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.002461 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.035300 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9425bd1f-c734-4ec0-9e2e-80b2d5ece709-observability-operator-tls\") pod \"observability-operator-78c97476f4-qxqmn\" (UID: \"9425bd1f-c734-4ec0-9e2e-80b2d5ece709\") " pod="openshift-operators/observability-operator-78c97476f4-qxqmn" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.035757 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bx9\" (UniqueName: \"kubernetes.io/projected/9425bd1f-c734-4ec0-9e2e-80b2d5ece709-kube-api-access-w6bx9\") pod \"observability-operator-78c97476f4-qxqmn\" (UID: \"9425bd1f-c734-4ec0-9e2e-80b2d5ece709\") " pod="openshift-operators/observability-operator-78c97476f4-qxqmn" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.062162 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-nqtp8"] Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.138090 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9425bd1f-c734-4ec0-9e2e-80b2d5ece709-observability-operator-tls\") pod \"observability-operator-78c97476f4-qxqmn\" (UID: \"9425bd1f-c734-4ec0-9e2e-80b2d5ece709\") " pod="openshift-operators/observability-operator-78c97476f4-qxqmn" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.138167 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bx9\" (UniqueName: \"kubernetes.io/projected/9425bd1f-c734-4ec0-9e2e-80b2d5ece709-kube-api-access-w6bx9\") pod \"observability-operator-78c97476f4-qxqmn\" (UID: 
\"9425bd1f-c734-4ec0-9e2e-80b2d5ece709\") " pod="openshift-operators/observability-operator-78c97476f4-qxqmn" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.144375 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9425bd1f-c734-4ec0-9e2e-80b2d5ece709-observability-operator-tls\") pod \"observability-operator-78c97476f4-qxqmn\" (UID: \"9425bd1f-c734-4ec0-9e2e-80b2d5ece709\") " pod="openshift-operators/observability-operator-78c97476f4-qxqmn" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.162923 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6bx9\" (UniqueName: \"kubernetes.io/projected/9425bd1f-c734-4ec0-9e2e-80b2d5ece709-kube-api-access-w6bx9\") pod \"observability-operator-78c97476f4-qxqmn\" (UID: \"9425bd1f-c734-4ec0-9e2e-80b2d5ece709\") " pod="openshift-operators/observability-operator-78c97476f4-qxqmn" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.223300 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-nqtp8"] Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.223510 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.228899 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-q7phj\"" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.242511 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-qxqmn" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.342435 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnn6\" (UniqueName: \"kubernetes.io/projected/f38bca5c-15f3-4d63-9c03-a33ec7a5f22b-kube-api-access-qxnn6\") pod \"perses-operator-68bdb49cbf-nqtp8\" (UID: \"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b\") " pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.342492 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f38bca5c-15f3-4d63-9c03-a33ec7a5f22b-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-nqtp8\" (UID: \"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b\") " pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.444586 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxnn6\" (UniqueName: \"kubernetes.io/projected/f38bca5c-15f3-4d63-9c03-a33ec7a5f22b-kube-api-access-qxnn6\") pod \"perses-operator-68bdb49cbf-nqtp8\" (UID: \"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b\") " pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.444802 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f38bca5c-15f3-4d63-9c03-a33ec7a5f22b-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-nqtp8\" (UID: \"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b\") " pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.445863 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f38bca5c-15f3-4d63-9c03-a33ec7a5f22b-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-nqtp8\" (UID: \"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b\") " pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.472014 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxnn6\" (UniqueName: \"kubernetes.io/projected/f38bca5c-15f3-4d63-9c03-a33ec7a5f22b-kube-api-access-qxnn6\") pod \"perses-operator-68bdb49cbf-nqtp8\" (UID: \"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b\") " pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:01 crc kubenswrapper[5130]: I1212 16:27:01.542366 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:02 crc kubenswrapper[5130]: I1212 16:27:02.969150 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-6md9w" Dec 12 16:27:03 crc kubenswrapper[5130]: I1212 16:27:03.080385 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jqtjf"] Dec 12 16:27:03 crc kubenswrapper[5130]: I1212 16:27:03.547239 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8pl6d" Dec 12 16:27:03 crc kubenswrapper[5130]: I1212 16:27:03.547300 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8pl6d" Dec 12 16:27:03 crc kubenswrapper[5130]: I1212 16:27:03.602751 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8pl6d" Dec 12 16:27:04 crc kubenswrapper[5130]: I1212 16:27:04.263099 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8pl6d" 
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.178748 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6c994c654b-42tmw"]
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.244877 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6c994c654b-42tmw"]
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.245082 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.248856 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-rf5wq\""
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.249124 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.249359 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\""
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.249576 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\""
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.302377 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1aa11df6-5c2b-4018-8146-09c5d79b9311-webhook-cert\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.302841 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9pn9\" (UniqueName: \"kubernetes.io/projected/1aa11df6-5c2b-4018-8146-09c5d79b9311-kube-api-access-m9pn9\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.302890 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1aa11df6-5c2b-4018-8146-09c5d79b9311-apiservice-cert\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.414073 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m9pn9\" (UniqueName: \"kubernetes.io/projected/1aa11df6-5c2b-4018-8146-09c5d79b9311-kube-api-access-m9pn9\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.414150 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1aa11df6-5c2b-4018-8146-09c5d79b9311-apiservice-cert\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.414241 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1aa11df6-5c2b-4018-8146-09c5d79b9311-webhook-cert\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.430374 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1aa11df6-5c2b-4018-8146-09c5d79b9311-webhook-cert\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.433883 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1aa11df6-5c2b-4018-8146-09c5d79b9311-apiservice-cert\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.450113 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9pn9\" (UniqueName: \"kubernetes.io/projected/1aa11df6-5c2b-4018-8146-09c5d79b9311-kube-api-access-m9pn9\") pod \"elastic-operator-6c994c654b-42tmw\" (UID: \"1aa11df6-5c2b-4018-8146-09c5d79b9311\") " pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.564499 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6c994c654b-42tmw"
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.767968 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pl6d"]
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.826418 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g"]
Dec 12 16:27:05 crc kubenswrapper[5130]: I1212 16:27:05.972277 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-qxqmn"]
Dec 12 16:27:06 crc kubenswrapper[5130]: W1212 16:27:06.016253 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9425bd1f_c734_4ec0_9e2e_80b2d5ece709.slice/crio-5e1483dea205ec79a1a499118d0a0e0c1adb2fd310a13a6f5a34c5ef1a4ef13c WatchSource:0}: Error finding container 5e1483dea205ec79a1a499118d0a0e0c1adb2fd310a13a6f5a34c5ef1a4ef13c: Status 404 returned error can't find the container with id 5e1483dea205ec79a1a499118d0a0e0c1adb2fd310a13a6f5a34c5ef1a4ef13c
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.216558 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4n58" event={"ID":"5f56514c-f6b2-4f15-8a4a-615ab5442708","Type":"ContainerStarted","Data":"109a5417fc5b240d74e50c2027f7b1468b267070f3fedd1a18f0a7ccc33b88a4"}
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.219066 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-qxqmn" event={"ID":"9425bd1f-c734-4ec0-9e2e-80b2d5ece709","Type":"ContainerStarted","Data":"5e1483dea205ec79a1a499118d0a0e0c1adb2fd310a13a6f5a34c5ef1a4ef13c"}
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.220788 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5" event={"ID":"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728","Type":"ContainerStarted","Data":"9f87e0abaf0e2dc7d04521b9992e32c1ddd13d7b0c38981d678c7d341a0c26c2"}
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.223476 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g" event={"ID":"c6b5aa8b-142f-4f74-a328-f0937a20672f","Type":"ContainerStarted","Data":"215eaa28b76476200c8ab036e8eeedcc38417fa2d7f547d20c78b51d8eea327b"}
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.223696 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8pl6d" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="registry-server" containerID="cri-o://1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5" gracePeriod=2
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.307070 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6c994c654b-42tmw"]
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.326515 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-nqtp8"]
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.405077 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr"]
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.409976 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-wbj29"]
Dec 12 16:27:06 crc kubenswrapper[5130]: W1212 16:27:06.441343 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc636fbb_cf50_4a1f_82f5_81db89bb0f5b.slice/crio-4f37588aa915265bb38384189d2bebd507e3439e01b1ab8101df19b76f529e46 WatchSource:0}: Error finding container 4f37588aa915265bb38384189d2bebd507e3439e01b1ab8101df19b76f529e46: Status 404 returned error can't find the container with id 4f37588aa915265bb38384189d2bebd507e3439e01b1ab8101df19b76f529e46
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.682632 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.743450 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g62bp\" (UniqueName: \"kubernetes.io/projected/3d7f1528-4228-46f7-8f31-311c3c561112-kube-api-access-g62bp\") pod \"3d7f1528-4228-46f7-8f31-311c3c561112\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") "
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.743593 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-utilities\") pod \"3d7f1528-4228-46f7-8f31-311c3c561112\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") "
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.743662 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-catalog-content\") pod \"3d7f1528-4228-46f7-8f31-311c3c561112\" (UID: \"3d7f1528-4228-46f7-8f31-311c3c561112\") "
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.744667 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-utilities" (OuterVolumeSpecName: "utilities") pod "3d7f1528-4228-46f7-8f31-311c3c561112" (UID: "3d7f1528-4228-46f7-8f31-311c3c561112"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.752888 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7f1528-4228-46f7-8f31-311c3c561112-kube-api-access-g62bp" (OuterVolumeSpecName: "kube-api-access-g62bp") pod "3d7f1528-4228-46f7-8f31-311c3c561112" (UID: "3d7f1528-4228-46f7-8f31-311c3c561112"). InnerVolumeSpecName "kube-api-access-g62bp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.798835 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d7f1528-4228-46f7-8f31-311c3c561112" (UID: "3d7f1528-4228-46f7-8f31-311c3c561112"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.845693 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.845766 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7f1528-4228-46f7-8f31-311c3c561112-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:06 crc kubenswrapper[5130]: I1212 16:27:06.845779 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g62bp\" (UniqueName: \"kubernetes.io/projected/3d7f1528-4228-46f7-8f31-311c3c561112-kube-api-access-g62bp\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.244699 5130 generic.go:358] "Generic (PLEG): container finished" podID="3d7f1528-4228-46f7-8f31-311c3c561112" containerID="1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5" exitCode=0
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.244974 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pl6d"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.245770 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pl6d" event={"ID":"3d7f1528-4228-46f7-8f31-311c3c561112","Type":"ContainerDied","Data":"1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.245855 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pl6d" event={"ID":"3d7f1528-4228-46f7-8f31-311c3c561112","Type":"ContainerDied","Data":"261a1835ba124316e408e51da506d5cb50fed33202cf3b038da80e6df8dcbac3"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.245883 5130 scope.go:117] "RemoveContainer" containerID="1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.268006 5130 generic.go:358] "Generic (PLEG): container finished" podID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerID="9f87e0abaf0e2dc7d04521b9992e32c1ddd13d7b0c38981d678c7d341a0c26c2" exitCode=0
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.268087 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5" event={"ID":"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728","Type":"ContainerDied","Data":"9f87e0abaf0e2dc7d04521b9992e32c1ddd13d7b0c38981d678c7d341a0c26c2"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.269853 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" event={"ID":"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b","Type":"ContainerStarted","Data":"5b61638b538d67385ff62ec556bb9836d79d18f96dbf65a8bfc5dbd83678fe29"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.279530 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29" event={"ID":"18744739-d26e-4056-a036-656151fcc824","Type":"ContainerStarted","Data":"ca918709ac3de2a4eabe8d4d0736ffb5efe6c11c978e10732b7e348cea2388a2"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.288392 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pl6d"]
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.296439 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr" event={"ID":"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b","Type":"ContainerStarted","Data":"4f37588aa915265bb38384189d2bebd507e3439e01b1ab8101df19b76f529e46"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.297986 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8pl6d"]
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.318614 5130 scope.go:117] "RemoveContainer" containerID="08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.321510 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6c994c654b-42tmw" event={"ID":"1aa11df6-5c2b-4018-8146-09c5d79b9311","Type":"ContainerStarted","Data":"61fdd494af0ccbc8ea45046db69f7485cdf193e6f532aeb32f18f8c12c5fe3e4"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.329519 5130 generic.go:358] "Generic (PLEG): container finished" podID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerID="109a5417fc5b240d74e50c2027f7b1468b267070f3fedd1a18f0a7ccc33b88a4" exitCode=0
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.329632 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4n58" event={"ID":"5f56514c-f6b2-4f15-8a4a-615ab5442708","Type":"ContainerDied","Data":"109a5417fc5b240d74e50c2027f7b1468b267070f3fedd1a18f0a7ccc33b88a4"}
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.389164 5130 scope.go:117] "RemoveContainer" containerID="6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.480368 5130 scope.go:117] "RemoveContainer" containerID="1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5"
Dec 12 16:27:07 crc kubenswrapper[5130]: E1212 16:27:07.490388 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5\": container with ID starting with 1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5 not found: ID does not exist" containerID="1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.490456 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5"} err="failed to get container status \"1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5\": rpc error: code = NotFound desc = could not find container \"1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5\": container with ID starting with 1ee71509fa042d99f69e0b7c52663a4247312f22aa3d8b2cfa30df09b65de2c5 not found: ID does not exist"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.490488 5130 scope.go:117] "RemoveContainer" containerID="08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7"
Dec 12 16:27:07 crc kubenswrapper[5130]: E1212 16:27:07.494392 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7\": container with ID starting with 08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7 not found: ID does not exist" containerID="08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.494475 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7"} err="failed to get container status \"08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7\": rpc error: code = NotFound desc = could not find container \"08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7\": container with ID starting with 08209b64ca43e25d869db907cb9f054a2b2af0cda36acb47426859e1d0f04bc7 not found: ID does not exist"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.494508 5130 scope.go:117] "RemoveContainer" containerID="6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5"
Dec 12 16:27:07 crc kubenswrapper[5130]: E1212 16:27:07.496759 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5\": container with ID starting with 6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5 not found: ID does not exist" containerID="6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5"
Dec 12 16:27:07 crc kubenswrapper[5130]: I1212 16:27:07.496828 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5"} err="failed to get container status \"6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5\": rpc error: code = NotFound desc = could not find container \"6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5\": container with ID starting with 6a312ffd5fb023e0a5a62b5f75bb6119b5908d5ea940c310a94ec75225f08ee5 not found: ID does not exist"
Dec 12 16:27:08 crc kubenswrapper[5130]: I1212 16:27:08.349421 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4n58" event={"ID":"5f56514c-f6b2-4f15-8a4a-615ab5442708","Type":"ContainerStarted","Data":"cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7"}
Dec 12 16:27:08 crc kubenswrapper[5130]: I1212 16:27:08.370880 5130 generic.go:358] "Generic (PLEG): container finished" podID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerID="6097bc4ff0cc7af33fe29f6b41a84ae724064e23a682b72a7685174302ca5603" exitCode=0
Dec 12 16:27:08 crc kubenswrapper[5130]: I1212 16:27:08.382387 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" path="/var/lib/kubelet/pods/3d7f1528-4228-46f7-8f31-311c3c561112/volumes"
Dec 12 16:27:08 crc kubenswrapper[5130]: I1212 16:27:08.383204 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5" event={"ID":"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728","Type":"ContainerDied","Data":"6097bc4ff0cc7af33fe29f6b41a84ae724064e23a682b72a7685174302ca5603"}
Dec 12 16:27:08 crc kubenswrapper[5130]: I1212 16:27:08.386026 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b4n58" podStartSLOduration=5.217136632 podStartE2EDuration="11.385992837s" podCreationTimestamp="2025-12-12 16:26:57 +0000 UTC" firstStartedPulling="2025-12-12 16:26:59.13434989 +0000 UTC m=+719.032024722" lastFinishedPulling="2025-12-12 16:27:05.303206095 +0000 UTC m=+725.200880927" observedRunningTime="2025-12-12 16:27:08.383624387 +0000 UTC m=+728.281299229" watchObservedRunningTime="2025-12-12 16:27:08.385992837 +0000 UTC m=+728.283667669"
Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.759800 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5"
Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.812509 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrsc\" (UniqueName: \"kubernetes.io/projected/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-kube-api-access-lzrsc\") pod \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") "
Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.812717 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-bundle\") pod \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") "
Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.814800 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-bundle" (OuterVolumeSpecName: "bundle") pod "86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" (UID: "86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728"). InnerVolumeSpecName "bundle".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.814946 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-util\") pod \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\" (UID: \"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728\") " Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.815528 5130 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-bundle\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.837514 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-kube-api-access-lzrsc" (OuterVolumeSpecName: "kube-api-access-lzrsc") pod "86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" (UID: "86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728"). InnerVolumeSpecName "kube-api-access-lzrsc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.840341 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-util" (OuterVolumeSpecName: "util") pod "86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" (UID: "86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.917291 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lzrsc\" (UniqueName: \"kubernetes.io/projected/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-kube-api-access-lzrsc\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:09 crc kubenswrapper[5130]: I1212 16:27:09.917333 5130 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728-util\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:10 crc kubenswrapper[5130]: I1212 16:27:10.432796 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5" event={"ID":"86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728","Type":"ContainerDied","Data":"2e8b42efd3171feed15bbc44b54f6ac59003e21923d0589f40a1a944cfdccf56"} Dec 12 16:27:10 crc kubenswrapper[5130]: I1212 16:27:10.432854 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aplxm5" Dec 12 16:27:10 crc kubenswrapper[5130]: I1212 16:27:10.432866 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8b42efd3171feed15bbc44b54f6ac59003e21923d0589f40a1a944cfdccf56" Dec 12 16:27:18 crc kubenswrapper[5130]: I1212 16:27:18.107465 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b4n58" Dec 12 16:27:18 crc kubenswrapper[5130]: I1212 16:27:18.107782 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-b4n58" Dec 12 16:27:18 crc kubenswrapper[5130]: I1212 16:27:18.170457 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b4n58" Dec 12 16:27:18 crc kubenswrapper[5130]: I1212 16:27:18.559945 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b4n58" Dec 12 16:27:21 crc kubenswrapper[5130]: I1212 16:27:21.960941 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4n58"] Dec 12 16:27:21 crc kubenswrapper[5130]: I1212 16:27:21.961808 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b4n58" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="registry-server" containerID="cri-o://cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7" gracePeriod=2 Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.759757 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt"] Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760436 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerName="pull" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760456 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerName="pull" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760475 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerName="util" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760481 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerName="util" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760491 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerName="extract" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760498 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerName="extract" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760507 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="extract-utilities" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760512 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="extract-utilities" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760523 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="registry-server" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760528 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="registry-server" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760539 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="extract-content" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760544 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="extract-content" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760643 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="3d7f1528-4228-46f7-8f31-311c3c561112" containerName="registry-server" Dec 12 16:27:22 crc kubenswrapper[5130]: I1212 16:27:22.760656 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="86d29eb0-7bf6-47c0-bd9a-c7ae45a7b728" containerName="extract" Dec 12 16:27:23 crc kubenswrapper[5130]: I1212 16:27:23.537031 5130 generic.go:358] "Generic (PLEG): container finished" podID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerID="cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7" exitCode=0 Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.479676 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4n58" event={"ID":"5f56514c-f6b2-4f15-8a4a-615ab5442708","Type":"ContainerDied","Data":"cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7"} Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.479972 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.480904 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt"] Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.485812 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-72tmp\"" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.485931 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.486034 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.594215 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtfd\" (UniqueName: \"kubernetes.io/projected/50e025ff-2065-4156-844d-68d8587d7b6c-kube-api-access-dxtfd\") pod \"cert-manager-operator-controller-manager-64c74584c4-djdmt\" (UID: \"50e025ff-2065-4156-844d-68d8587d7b6c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.594289 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50e025ff-2065-4156-844d-68d8587d7b6c-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-djdmt\" (UID: \"50e025ff-2065-4156-844d-68d8587d7b6c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.696298 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dxtfd\" (UniqueName: \"kubernetes.io/projected/50e025ff-2065-4156-844d-68d8587d7b6c-kube-api-access-dxtfd\") pod \"cert-manager-operator-controller-manager-64c74584c4-djdmt\" (UID: \"50e025ff-2065-4156-844d-68d8587d7b6c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.696376 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50e025ff-2065-4156-844d-68d8587d7b6c-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-djdmt\" (UID: \"50e025ff-2065-4156-844d-68d8587d7b6c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.696835 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50e025ff-2065-4156-844d-68d8587d7b6c-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-djdmt\" (UID: \"50e025ff-2065-4156-844d-68d8587d7b6c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.728903 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxtfd\" (UniqueName: \"kubernetes.io/projected/50e025ff-2065-4156-844d-68d8587d7b6c-kube-api-access-dxtfd\") pod \"cert-manager-operator-controller-manager-64c74584c4-djdmt\" (UID: \"50e025ff-2065-4156-844d-68d8587d7b6c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:24 crc kubenswrapper[5130]: I1212 16:27:24.812825 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" Dec 12 16:27:25 crc kubenswrapper[5130]: I1212 16:27:25.367291 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9wq8j"] Dec 12 16:27:25 crc kubenswrapper[5130]: I1212 16:27:25.782784 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9wq8j"] Dec 12 16:27:25 crc kubenswrapper[5130]: I1212 16:27:25.783654 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:25 crc kubenswrapper[5130]: I1212 16:27:25.922894 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vzv\" (UniqueName: \"kubernetes.io/projected/098dcbcc-c98d-4de4-9c46-f40973d5ca17-kube-api-access-57vzv\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:25 crc kubenswrapper[5130]: I1212 16:27:25.923088 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-catalog-content\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:25 crc kubenswrapper[5130]: I1212 16:27:25.923290 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-utilities\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:26 crc kubenswrapper[5130]: I1212 16:27:26.027439 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-57vzv\" (UniqueName: \"kubernetes.io/projected/098dcbcc-c98d-4de4-9c46-f40973d5ca17-kube-api-access-57vzv\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:26 crc kubenswrapper[5130]: I1212 16:27:26.027632 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-catalog-content\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:26 crc kubenswrapper[5130]: I1212 16:27:26.027688 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-utilities\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:26 crc kubenswrapper[5130]: I1212 16:27:26.028167 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-catalog-content\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:26 crc kubenswrapper[5130]: I1212 16:27:26.028484 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-utilities\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:26 crc kubenswrapper[5130]: I1212 16:27:26.060447 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-57vzv\" (UniqueName: \"kubernetes.io/projected/098dcbcc-c98d-4de4-9c46-f40973d5ca17-kube-api-access-57vzv\") pod \"community-operators-9wq8j\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:26 crc kubenswrapper[5130]: I1212 16:27:26.104789 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:28 crc kubenswrapper[5130]: I1212 16:27:28.137985 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" podUID="162da780-4bd3-4acf-b114-06ae104fc8ad" containerName="registry" containerID="cri-o://a39c80875bc5a6660406644e4cb5ad2ca4830e3788cd5f6a1d14fba813a1e0fc" gracePeriod=30 Dec 12 16:27:28 crc kubenswrapper[5130]: E1212 16:27:28.504042 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7 is running failed: container process not found" containerID="cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:27:28 crc kubenswrapper[5130]: E1212 16:27:28.505634 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7 is running failed: container process not found" containerID="cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:27:28 crc kubenswrapper[5130]: E1212 16:27:28.506734 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7 is running failed: container process not found" containerID="cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:27:28 crc kubenswrapper[5130]: E1212 16:27:28.506888 5130 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b4n58" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="registry-server" probeResult="unknown" Dec 12 16:27:28 crc kubenswrapper[5130]: I1212 16:27:28.575389 5130 generic.go:358] "Generic (PLEG): container finished" podID="162da780-4bd3-4acf-b114-06ae104fc8ad" containerID="a39c80875bc5a6660406644e4cb5ad2ca4830e3788cd5f6a1d14fba813a1e0fc" exitCode=0 Dec 12 16:27:28 crc kubenswrapper[5130]: I1212 16:27:28.575486 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" event={"ID":"162da780-4bd3-4acf-b114-06ae104fc8ad","Type":"ContainerDied","Data":"a39c80875bc5a6660406644e4cb5ad2ca4830e3788cd5f6a1d14fba813a1e0fc"} Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.587924 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4n58" event={"ID":"5f56514c-f6b2-4f15-8a4a-615ab5442708","Type":"ContainerDied","Data":"fe08d83e6afea017058d5fc6f57ddccb08368d775f104e9ac99e55142871b310"} Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.587984 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe08d83e6afea017058d5fc6f57ddccb08368d775f104e9ac99e55142871b310" Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.654430 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4n58" Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.799380 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9bs2\" (UniqueName: \"kubernetes.io/projected/5f56514c-f6b2-4f15-8a4a-615ab5442708-kube-api-access-n9bs2\") pod \"5f56514c-f6b2-4f15-8a4a-615ab5442708\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.799854 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-utilities\") pod \"5f56514c-f6b2-4f15-8a4a-615ab5442708\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.800060 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-catalog-content\") pod \"5f56514c-f6b2-4f15-8a4a-615ab5442708\" (UID: \"5f56514c-f6b2-4f15-8a4a-615ab5442708\") " Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.802526 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-utilities" (OuterVolumeSpecName: "utilities") pod "5f56514c-f6b2-4f15-8a4a-615ab5442708" (UID: "5f56514c-f6b2-4f15-8a4a-615ab5442708"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.808587 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f56514c-f6b2-4f15-8a4a-615ab5442708-kube-api-access-n9bs2" (OuterVolumeSpecName: "kube-api-access-n9bs2") pod "5f56514c-f6b2-4f15-8a4a-615ab5442708" (UID: "5f56514c-f6b2-4f15-8a4a-615ab5442708"). InnerVolumeSpecName "kube-api-access-n9bs2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.859242 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.893851 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt"] Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.903383 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n9bs2\" (UniqueName: \"kubernetes.io/projected/5f56514c-f6b2-4f15-8a4a-615ab5442708-kube-api-access-n9bs2\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.903427 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.928164 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9wq8j"] Dec 12 16:27:29 crc kubenswrapper[5130]: I1212 16:27:29.946646 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f56514c-f6b2-4f15-8a4a-615ab5442708" (UID: "5f56514c-f6b2-4f15-8a4a-615ab5442708"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.004825 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/162da780-4bd3-4acf-b114-06ae104fc8ad-installation-pull-secrets\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.004887 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/162da780-4bd3-4acf-b114-06ae104fc8ad-ca-trust-extracted\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.005024 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.005092 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-trusted-ca\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.005164 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-certificates\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.005255 5130 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-q8889\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-kube-api-access-q8889\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.005350 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-bound-sa-token\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.006205 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-tls\") pod \"162da780-4bd3-4acf-b114-06ae104fc8ad\" (UID: \"162da780-4bd3-4acf-b114-06ae104fc8ad\") " Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.006523 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f56514c-f6b2-4f15-8a4a-615ab5442708-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.006616 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.006744 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.019827 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.020146 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/162da780-4bd3-4acf-b114-06ae104fc8ad-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.020903 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-kube-api-access-q8889" (OuterVolumeSpecName: "kube-api-access-q8889") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "kube-api-access-q8889". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.024365 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.026359 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.028400 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/162da780-4bd3-4acf-b114-06ae104fc8ad-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "162da780-4bd3-4acf-b114-06ae104fc8ad" (UID: "162da780-4bd3-4acf-b114-06ae104fc8ad"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.107877 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q8889\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-kube-api-access-q8889\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.107911 5130 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.107923 5130 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.107936 5130 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/162da780-4bd3-4acf-b114-06ae104fc8ad-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.107947 5130 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/162da780-4bd3-4acf-b114-06ae104fc8ad-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.107960 5130 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.107971 5130 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/162da780-4bd3-4acf-b114-06ae104fc8ad-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 12 16:27:30 crc
kubenswrapper[5130]: I1212 16:27:30.596358 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g" event={"ID":"c6b5aa8b-142f-4f74-a328-f0937a20672f","Type":"ContainerStarted","Data":"38913e8db857b861bc8b98314e6f387b1fe2b559b6160546a5e2579aa07c67ce"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.604274 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.604387 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jqtjf" event={"ID":"162da780-4bd3-4acf-b114-06ae104fc8ad","Type":"ContainerDied","Data":"4d802f5dbe85c769c5b4afa6aaa710f145332a5713a213a44b0344adeeb96222"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.604890 5130 scope.go:117] "RemoveContainer" containerID="a39c80875bc5a6660406644e4cb5ad2ca4830e3788cd5f6a1d14fba813a1e0fc"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.606700 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" event={"ID":"f38bca5c-15f3-4d63-9c03-a33ec7a5f22b","Type":"ContainerStarted","Data":"75c2c69445921fb73efb3a495042010e0223af4886d69f7ac77e280d4f1dc55a"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.607461 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.615199 5130 generic.go:358] "Generic (PLEG): container finished" podID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerID="015fd289c9cb928130635ba046b94f394cfb69aa9041a8ab2353637d71ea07b2" exitCode=0
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.615298 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wq8j" event={"ID":"098dcbcc-c98d-4de4-9c46-f40973d5ca17","Type":"ContainerDied","Data":"015fd289c9cb928130635ba046b94f394cfb69aa9041a8ab2353637d71ea07b2"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.615367 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wq8j" event={"ID":"098dcbcc-c98d-4de4-9c46-f40973d5ca17","Type":"ContainerStarted","Data":"bbad89595bffa7c2b78f2f4506d008724735866d5bdc5fb821bbce670a2547db"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.627171 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29" event={"ID":"18744739-d26e-4056-a036-656151fcc824","Type":"ContainerStarted","Data":"4f6058d93dd7533e82a6236bd6254532b6fe95d6480e198231a641c746f8f044"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.628576 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-52l2g" podStartSLOduration=6.903134756 podStartE2EDuration="30.628553273s" podCreationTimestamp="2025-12-12 16:27:00 +0000 UTC" firstStartedPulling="2025-12-12 16:27:05.845315565 +0000 UTC m=+725.742990397" lastFinishedPulling="2025-12-12 16:27:29.570734082 +0000 UTC m=+749.468408914" observedRunningTime="2025-12-12 16:27:30.625610238 +0000 UTC m=+750.523285070" watchObservedRunningTime="2025-12-12 16:27:30.628553273 +0000 UTC m=+750.526228105"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.634159 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr" event={"ID":"bc636fbb-cf50-4a1f-82f5-81db89bb0f5b","Type":"ContainerStarted","Data":"e6c91114c9feeb66f81eb5b7e1db59665d1a32728b866db9b7900f7f448310e3"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.639478 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod"
pod="service-telemetry/elastic-operator-6c994c654b-42tmw" event={"ID":"1aa11df6-5c2b-4018-8146-09c5d79b9311","Type":"ContainerStarted","Data":"f7468ca2597b25367291c527f32ed32cc5934aaea968905e6debd499cadf6d71"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.641932 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" event={"ID":"50e025ff-2065-4156-844d-68d8587d7b6c","Type":"ContainerStarted","Data":"66b94b25dc46df9365e11bb4d6f85afcf13fc946133634b28e06a6aeafa41bd9"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.650050 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4n58"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.650204 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-qxqmn" event={"ID":"9425bd1f-c734-4ec0-9e2e-80b2d5ece709","Type":"ContainerStarted","Data":"a5b7cfa5bd15ddbab240a46ad67d76ae4a68637d534ed5070d0c8b4caf9dfb8d"}
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.652915 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jqtjf"]
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.668136 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jqtjf"]
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.681123 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" podStartSLOduration=6.413519304 podStartE2EDuration="29.681098073s" podCreationTimestamp="2025-12-12 16:27:01 +0000 UTC" firstStartedPulling="2025-12-12 16:27:06.344002486 +0000 UTC m=+726.241677318" lastFinishedPulling="2025-12-12 16:27:29.611581265 +0000 UTC m=+749.509256087" observedRunningTime="2025-12-12 16:27:30.675891741 +0000 UTC m=+750.573566583" watchObservedRunningTime="2025-12-12 16:27:30.681098073 +0000 UTC m=+750.578772905"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.719411 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-wbj29" podStartSLOduration=7.5944730830000005 podStartE2EDuration="30.719373221s" podCreationTimestamp="2025-12-12 16:27:00 +0000 UTC" firstStartedPulling="2025-12-12 16:27:06.454222006 +0000 UTC m=+726.351896838" lastFinishedPulling="2025-12-12 16:27:29.579122144 +0000 UTC m=+749.476796976" observedRunningTime="2025-12-12 16:27:30.718900369 +0000 UTC m=+750.616575201" watchObservedRunningTime="2025-12-12 16:27:30.719373221 +0000 UTC m=+750.617048063"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.751612 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6c994c654b-42tmw" podStartSLOduration=2.54144418 podStartE2EDuration="25.751579816s" podCreationTimestamp="2025-12-12 16:27:05 +0000 UTC" firstStartedPulling="2025-12-12 16:27:06.310930379 +0000 UTC m=+726.208605211" lastFinishedPulling="2025-12-12 16:27:29.521066015 +0000 UTC m=+749.418740847" observedRunningTime="2025-12-12 16:27:30.748533139 +0000 UTC m=+750.646207981" watchObservedRunningTime="2025-12-12 16:27:30.751579816 +0000 UTC m=+750.649254648"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.779303 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cd6b88c95-7vbzr" podStartSLOduration=7.623763895 podStartE2EDuration="30.779285498s" podCreationTimestamp="2025-12-12 16:27:00 +0000 UTC" firstStartedPulling="2025-12-12 16:27:06.455260762 +0000 UTC m=+726.352935594" lastFinishedPulling="2025-12-12 16:27:29.610782365 +0000 UTC m=+749.508457197" observedRunningTime="2025-12-12 16:27:30.778766304 +0000 UTC m=+750.676441146" watchObservedRunningTime="2025-12-12 16:27:30.779285498 +0000 UTC m=+750.676960330"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.817123 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-qxqmn" podStartSLOduration=7.288227092 podStartE2EDuration="30.817087924s" podCreationTimestamp="2025-12-12 16:27:00 +0000 UTC" firstStartedPulling="2025-12-12 16:27:06.022637303 +0000 UTC m=+725.920312135" lastFinishedPulling="2025-12-12 16:27:29.551498135 +0000 UTC m=+749.449172967" observedRunningTime="2025-12-12 16:27:30.812814196 +0000 UTC m=+750.710489028" watchObservedRunningTime="2025-12-12 16:27:30.817087924 +0000 UTC m=+750.714762756"
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.877530 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4n58"]
Dec 12 16:27:30 crc kubenswrapper[5130]: I1212 16:27:30.881689 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b4n58"]
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.362791 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363906 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="registry-server"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363926 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="registry-server"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363942 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="extract-content"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363949 5130 state_mem.go:107] "Deleted CPUSet assignment"
podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="extract-content"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363961 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="162da780-4bd3-4acf-b114-06ae104fc8ad" containerName="registry"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363967 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="162da780-4bd3-4acf-b114-06ae104fc8ad" containerName="registry"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363979 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="extract-utilities"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.363985 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="extract-utilities"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.364111 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" containerName="registry-server"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.364128 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="162da780-4bd3-4acf-b114-06ae104fc8ad" containerName="registry"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.376338 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.380403 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.381822 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.382139 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-8qddz\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.382271 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.385689 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.386352 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.386524 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.386550 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.386803 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.391628 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api"
pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.456927 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457003 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457040 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457071 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457104 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457133 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457216 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457240 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457259 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457309 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457335 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457378 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457417 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457440 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.457479 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.558757 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.558837 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.558869 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") "
pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.558922 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.558960 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.558996 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559024 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559050 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559077 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559117 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559144 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559192 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559226 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559259 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.559290 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.561352 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.563113 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") "
pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.563699 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.567436 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.567597 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.567604 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.569290 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.569742 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.569839 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.570345 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.570869 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.571803 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.573827 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.574612 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.575157 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/8b73b1a4-74b4-4b36-9c02-328f2cc9b99a-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.678073 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wq8j" event={"ID":"098dcbcc-c98d-4de4-9c46-f40973d5ca17","Type":"ContainerStarted","Data":"8a88fc0f5d965c397f01807c816f99ba3cacb95c8250551e1b144e06e28a7bb9"}
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.679486 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-qxqmn"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.680136 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-qxqmn"
Dec 12 16:27:31 crc kubenswrapper[5130]: I1212 16:27:31.701706 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:27:32 crc kubenswrapper[5130]: I1212 16:27:32.066737 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:27:32 crc kubenswrapper[5130]: W1212 16:27:32.077405 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b73b1a4_74b4_4b36_9c02_328f2cc9b99a.slice/crio-4da9688ca213ab9e5cdb3054bb7b1306f907092097bc718be5cba29b736caff3 WatchSource:0}: Error finding container 4da9688ca213ab9e5cdb3054bb7b1306f907092097bc718be5cba29b736caff3: Status 404 returned error can't find the container with id 4da9688ca213ab9e5cdb3054bb7b1306f907092097bc718be5cba29b736caff3
Dec 12 16:27:32 crc kubenswrapper[5130]: I1212 16:27:32.382269 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="162da780-4bd3-4acf-b114-06ae104fc8ad" path="/var/lib/kubelet/pods/162da780-4bd3-4acf-b114-06ae104fc8ad/volumes"
Dec 12 16:27:32 crc kubenswrapper[5130]: I1212 16:27:32.383357 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f56514c-f6b2-4f15-8a4a-615ab5442708" path="/var/lib/kubelet/pods/5f56514c-f6b2-4f15-8a4a-615ab5442708/volumes"
Dec 12 16:27:32 crc kubenswrapper[5130]: I1212 16:27:32.689091 5130 generic.go:358] "Generic (PLEG): container finished" podID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerID="8a88fc0f5d965c397f01807c816f99ba3cacb95c8250551e1b144e06e28a7bb9" exitCode=0
Dec 12 16:27:32 crc kubenswrapper[5130]: I1212 16:27:32.689190 5130
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wq8j" event={"ID":"098dcbcc-c98d-4de4-9c46-f40973d5ca17","Type":"ContainerDied","Data":"8a88fc0f5d965c397f01807c816f99ba3cacb95c8250551e1b144e06e28a7bb9"} Dec 12 16:27:32 crc kubenswrapper[5130]: I1212 16:27:32.691314 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a","Type":"ContainerStarted","Data":"4da9688ca213ab9e5cdb3054bb7b1306f907092097bc718be5cba29b736caff3"} Dec 12 16:27:33 crc kubenswrapper[5130]: I1212 16:27:33.705349 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wq8j" event={"ID":"098dcbcc-c98d-4de4-9c46-f40973d5ca17","Type":"ContainerStarted","Data":"85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6"} Dec 12 16:27:33 crc kubenswrapper[5130]: I1212 16:27:33.729362 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9wq8j" podStartSLOduration=8.149819582 podStartE2EDuration="8.729339519s" podCreationTimestamp="2025-12-12 16:27:25 +0000 UTC" firstStartedPulling="2025-12-12 16:27:30.616324703 +0000 UTC m=+750.513999535" lastFinishedPulling="2025-12-12 16:27:31.19584464 +0000 UTC m=+751.093519472" observedRunningTime="2025-12-12 16:27:33.727457251 +0000 UTC m=+753.625132103" watchObservedRunningTime="2025-12-12 16:27:33.729339519 +0000 UTC m=+753.627014351" Dec 12 16:27:35 crc kubenswrapper[5130]: I1212 16:27:35.740320 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" event={"ID":"50e025ff-2065-4156-844d-68d8587d7b6c","Type":"ContainerStarted","Data":"7eeb427bbef68d34b6c018c3b74ac832848b9c096bf2e737e77a93a810f9ed44"} Dec 12 16:27:35 crc kubenswrapper[5130]: I1212 16:27:35.775630 5130 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-djdmt" podStartSLOduration=8.974106079 podStartE2EDuration="13.775607248s" podCreationTimestamp="2025-12-12 16:27:22 +0000 UTC" firstStartedPulling="2025-12-12 16:27:29.920804421 +0000 UTC m=+749.818479253" lastFinishedPulling="2025-12-12 16:27:34.7223056 +0000 UTC m=+754.619980422" observedRunningTime="2025-12-12 16:27:35.773531755 +0000 UTC m=+755.671206587" watchObservedRunningTime="2025-12-12 16:27:35.775607248 +0000 UTC m=+755.673282080" Dec 12 16:27:36 crc kubenswrapper[5130]: I1212 16:27:36.105803 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:36 crc kubenswrapper[5130]: I1212 16:27:36.106281 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:36 crc kubenswrapper[5130]: I1212 16:27:36.181122 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:37 crc kubenswrapper[5130]: I1212 16:27:37.799658 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:37 crc kubenswrapper[5130]: I1212 16:27:37.999686 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt"] Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.014623 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.016694 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt"] Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.019303 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-2tblb\"" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.019458 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.024287 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.101874 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvsd\" (UniqueName: \"kubernetes.io/projected/c184b148-4467-4bd5-8204-6369360370ee-kube-api-access-wgvsd\") pod \"cert-manager-webhook-7894b5b9b4-2kmrt\" (UID: \"c184b148-4467-4bd5-8204-6369360370ee\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.102151 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c184b148-4467-4bd5-8204-6369360370ee-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-2kmrt\" (UID: \"c184b148-4467-4bd5-8204-6369360370ee\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.204781 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvsd\" (UniqueName: \"kubernetes.io/projected/c184b148-4467-4bd5-8204-6369360370ee-kube-api-access-wgvsd\") pod 
\"cert-manager-webhook-7894b5b9b4-2kmrt\" (UID: \"c184b148-4467-4bd5-8204-6369360370ee\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.204863 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c184b148-4467-4bd5-8204-6369360370ee-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-2kmrt\" (UID: \"c184b148-4467-4bd5-8204-6369360370ee\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.225909 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c184b148-4467-4bd5-8204-6369360370ee-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-2kmrt\" (UID: \"c184b148-4467-4bd5-8204-6369360370ee\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.227244 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvsd\" (UniqueName: \"kubernetes.io/projected/c184b148-4467-4bd5-8204-6369360370ee-kube-api-access-wgvsd\") pod \"cert-manager-webhook-7894b5b9b4-2kmrt\" (UID: \"c184b148-4467-4bd5-8204-6369360370ee\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.356896 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" Dec 12 16:27:38 crc kubenswrapper[5130]: I1212 16:27:38.363415 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9wq8j"] Dec 12 16:27:39 crc kubenswrapper[5130]: I1212 16:27:39.773198 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9wq8j" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="registry-server" containerID="cri-o://85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6" gracePeriod=2 Dec 12 16:27:41 crc kubenswrapper[5130]: I1212 16:27:41.574606 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl"] Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.463040 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl"] Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.463384 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.466962 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-bg7l4\"" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.477420 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-nqtp8" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.600650 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jcf\" (UniqueName: \"kubernetes.io/projected/7f3690b6-63d7-48cc-9508-e016e3476a99-kube-api-access-25jcf\") pod \"cert-manager-cainjector-7dbf76d5c8-lv2hl\" (UID: \"7f3690b6-63d7-48cc-9508-e016e3476a99\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.601533 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f3690b6-63d7-48cc-9508-e016e3476a99-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-lv2hl\" (UID: \"7f3690b6-63d7-48cc-9508-e016e3476a99\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.702905 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25jcf\" (UniqueName: \"kubernetes.io/projected/7f3690b6-63d7-48cc-9508-e016e3476a99-kube-api-access-25jcf\") pod \"cert-manager-cainjector-7dbf76d5c8-lv2hl\" (UID: \"7f3690b6-63d7-48cc-9508-e016e3476a99\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.703494 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/7f3690b6-63d7-48cc-9508-e016e3476a99-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-lv2hl\" (UID: \"7f3690b6-63d7-48cc-9508-e016e3476a99\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.730417 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25jcf\" (UniqueName: \"kubernetes.io/projected/7f3690b6-63d7-48cc-9508-e016e3476a99-kube-api-access-25jcf\") pod \"cert-manager-cainjector-7dbf76d5c8-lv2hl\" (UID: \"7f3690b6-63d7-48cc-9508-e016e3476a99\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.738711 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f3690b6-63d7-48cc-9508-e016e3476a99-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-lv2hl\" (UID: \"7f3690b6-63d7-48cc-9508-e016e3476a99\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.823687 5130 generic.go:358] "Generic (PLEG): container finished" podID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerID="85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6" exitCode=0 Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.823846 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wq8j" event={"ID":"098dcbcc-c98d-4de4-9c46-f40973d5ca17","Type":"ContainerDied","Data":"85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6"} Dec 12 16:27:43 crc kubenswrapper[5130]: I1212 16:27:43.833841 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" Dec 12 16:27:47 crc kubenswrapper[5130]: E1212 16:27:47.754905 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6 is running failed: container process not found" containerID="85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:27:47 crc kubenswrapper[5130]: E1212 16:27:47.756712 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6 is running failed: container process not found" containerID="85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:27:47 crc kubenswrapper[5130]: E1212 16:27:47.757347 5130 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6 is running failed: container process not found" containerID="85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6" cmd=["grpc_health_probe","-addr=:50051"] Dec 12 16:27:47 crc kubenswrapper[5130]: E1212 16:27:47.757408 5130 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9wq8j" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="registry-server" probeResult="unknown" Dec 12 16:27:51 crc kubenswrapper[5130]: I1212 16:27:51.443234 5130 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-r7f8q"] Dec 12 16:27:53 crc kubenswrapper[5130]: I1212 16:27:53.768622 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:53 crc kubenswrapper[5130]: I1212 16:27:53.771800 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-spvtv\"" Dec 12 16:27:53 crc kubenswrapper[5130]: I1212 16:27:53.787140 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-r7f8q"] Dec 12 16:27:53 crc kubenswrapper[5130]: I1212 16:27:53.887718 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l7m9\" (UniqueName: \"kubernetes.io/projected/7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc-kube-api-access-9l7m9\") pod \"cert-manager-858d87f86b-r7f8q\" (UID: \"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc\") " pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:53 crc kubenswrapper[5130]: I1212 16:27:53.887766 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc-bound-sa-token\") pod \"cert-manager-858d87f86b-r7f8q\" (UID: \"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc\") " pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:53 crc kubenswrapper[5130]: I1212 16:27:53.989210 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc-bound-sa-token\") pod \"cert-manager-858d87f86b-r7f8q\" (UID: \"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc\") " pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:53 crc kubenswrapper[5130]: I1212 16:27:53.989588 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9l7m9\" (UniqueName: \"kubernetes.io/projected/7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc-kube-api-access-9l7m9\") pod \"cert-manager-858d87f86b-r7f8q\" (UID: \"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc\") " pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:54 crc kubenswrapper[5130]: I1212 16:27:54.011029 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc-bound-sa-token\") pod \"cert-manager-858d87f86b-r7f8q\" (UID: \"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc\") " pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:54 crc kubenswrapper[5130]: I1212 16:27:54.013931 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l7m9\" (UniqueName: \"kubernetes.io/projected/7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc-kube-api-access-9l7m9\") pod \"cert-manager-858d87f86b-r7f8q\" (UID: \"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc\") " pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:54 crc kubenswrapper[5130]: I1212 16:27:54.098677 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-r7f8q" Dec 12 16:27:55 crc kubenswrapper[5130]: I1212 16:27:55.995352 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.120016 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-catalog-content\") pod \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.120097 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57vzv\" (UniqueName: \"kubernetes.io/projected/098dcbcc-c98d-4de4-9c46-f40973d5ca17-kube-api-access-57vzv\") pod \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.120132 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-utilities\") pod \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\" (UID: \"098dcbcc-c98d-4de4-9c46-f40973d5ca17\") " Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.121295 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-utilities" (OuterVolumeSpecName: "utilities") pod "098dcbcc-c98d-4de4-9c46-f40973d5ca17" (UID: "098dcbcc-c98d-4de4-9c46-f40973d5ca17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.127888 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098dcbcc-c98d-4de4-9c46-f40973d5ca17-kube-api-access-57vzv" (OuterVolumeSpecName: "kube-api-access-57vzv") pod "098dcbcc-c98d-4de4-9c46-f40973d5ca17" (UID: "098dcbcc-c98d-4de4-9c46-f40973d5ca17"). InnerVolumeSpecName "kube-api-access-57vzv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.172760 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "098dcbcc-c98d-4de4-9c46-f40973d5ca17" (UID: "098dcbcc-c98d-4de4-9c46-f40973d5ca17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.222792 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.222869 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-57vzv\" (UniqueName: \"kubernetes.io/projected/098dcbcc-c98d-4de4-9c46-f40973d5ca17-kube-api-access-57vzv\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.222896 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098dcbcc-c98d-4de4-9c46-f40973d5ca17-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.936557 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wq8j" event={"ID":"098dcbcc-c98d-4de4-9c46-f40973d5ca17","Type":"ContainerDied","Data":"bbad89595bffa7c2b78f2f4506d008724735866d5bdc5fb821bbce670a2547db"} Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.936568 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9wq8j" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.936837 5130 scope.go:117] "RemoveContainer" containerID="85f8b2736e96d20053a745fa7816d83d74ceebcaa0ff2e83227c3744759c71b6" Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.964449 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9wq8j"] Dec 12 16:27:56 crc kubenswrapper[5130]: I1212 16:27:56.971458 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9wq8j"] Dec 12 16:27:58 crc kubenswrapper[5130]: I1212 16:27:58.385275 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" path="/var/lib/kubelet/pods/098dcbcc-c98d-4de4-9c46-f40973d5ca17/volumes" Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.078792 5130 scope.go:117] "RemoveContainer" containerID="8a88fc0f5d965c397f01807c816f99ba3cacb95c8250551e1b144e06e28a7bb9" Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.361778 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-r7f8q"] Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.384576 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl"] Dec 12 16:28:00 crc kubenswrapper[5130]: W1212 16:28:00.470920 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b3ac2d2_e3da_4934_b2d6_6e7b3be9afdc.slice/crio-4a7fc81ec19f101129a063dcd3ab5aa956613d70f452b2fe42d9679caa769ca9 WatchSource:0}: Error finding container 4a7fc81ec19f101129a063dcd3ab5aa956613d70f452b2fe42d9679caa769ca9: Status 404 returned error can't find the container with id 4a7fc81ec19f101129a063dcd3ab5aa956613d70f452b2fe42d9679caa769ca9 Dec 12 16:28:00 crc kubenswrapper[5130]: W1212 16:28:00.548219 5130 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f3690b6_63d7_48cc_9508_e016e3476a99.slice/crio-ce8620fb1a8db50e75c63b530a6c65737c31aa69dd54e1d85c343c451b3e8abc WatchSource:0}: Error finding container ce8620fb1a8db50e75c63b530a6c65737c31aa69dd54e1d85c343c451b3e8abc: Status 404 returned error can't find the container with id ce8620fb1a8db50e75c63b530a6c65737c31aa69dd54e1d85c343c451b3e8abc Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.554082 5130 scope.go:117] "RemoveContainer" containerID="015fd289c9cb928130635ba046b94f394cfb69aa9041a8ab2353637d71ea07b2" Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.676995 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt"] Dec 12 16:28:00 crc kubenswrapper[5130]: W1212 16:28:00.700650 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc184b148_4467_4bd5_8204_6369360370ee.slice/crio-57f73a8550406dbbe8a60e3f5cb132d520c975894e5462ee359436abd49a1bbd WatchSource:0}: Error finding container 57f73a8550406dbbe8a60e3f5cb132d520c975894e5462ee359436abd49a1bbd: Status 404 returned error can't find the container with id 57f73a8550406dbbe8a60e3f5cb132d520c975894e5462ee359436abd49a1bbd Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.967757 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" event={"ID":"7f3690b6-63d7-48cc-9508-e016e3476a99","Type":"ContainerStarted","Data":"ce8620fb1a8db50e75c63b530a6c65737c31aa69dd54e1d85c343c451b3e8abc"} Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.968988 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-r7f8q" 
event={"ID":"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc","Type":"ContainerStarted","Data":"4a7fc81ec19f101129a063dcd3ab5aa956613d70f452b2fe42d9679caa769ca9"}
Dec 12 16:28:00 crc kubenswrapper[5130]: I1212 16:28:00.971639 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" event={"ID":"c184b148-4467-4bd5-8204-6369360370ee","Type":"ContainerStarted","Data":"57f73a8550406dbbe8a60e3f5cb132d520c975894e5462ee359436abd49a1bbd"}
Dec 12 16:28:02 crc kubenswrapper[5130]: I1212 16:28:02.173280 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a","Type":"ContainerStarted","Data":"d17ddd56ec1e69b963f06df02f54c22e27d986e75164a0c8e2bba0d7b48270bf"}
Dec 12 16:28:02 crc kubenswrapper[5130]: I1212 16:28:02.398930 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:28:02 crc kubenswrapper[5130]: I1212 16:28:02.433886 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 12 16:28:04 crc kubenswrapper[5130]: I1212 16:28:04.189778 5130 generic.go:358] "Generic (PLEG): container finished" podID="8b73b1a4-74b4-4b36-9c02-328f2cc9b99a" containerID="d17ddd56ec1e69b963f06df02f54c22e27d986e75164a0c8e2bba0d7b48270bf" exitCode=0
Dec 12 16:28:04 crc kubenswrapper[5130]: I1212 16:28:04.189866 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a","Type":"ContainerDied","Data":"d17ddd56ec1e69b963f06df02f54c22e27d986e75164a0c8e2bba0d7b48270bf"}
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.231231 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" event={"ID":"c184b148-4467-4bd5-8204-6369360370ee","Type":"ContainerStarted","Data":"d03cff198514120f2f92cb5ebedb67b99718e9df881eaf9fb581e713642ea437"}
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.231820 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt"
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.240492 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" event={"ID":"7f3690b6-63d7-48cc-9508-e016e3476a99","Type":"ContainerStarted","Data":"14fe3d10726ea4ec48aa5041a7982fadca49f72c26ed93de20731acda724c90f"}
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.247934 5130 generic.go:358] "Generic (PLEG): container finished" podID="8b73b1a4-74b4-4b36-9c02-328f2cc9b99a" containerID="02c6ca623bfbf1e086ed2b54c19e328c12a5ce746f2524c5bfadf77c7b7e7621" exitCode=0
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.248024 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a","Type":"ContainerDied","Data":"02c6ca623bfbf1e086ed2b54c19e328c12a5ce746f2524c5bfadf77c7b7e7621"}
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.251871 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-r7f8q" event={"ID":"7b3ac2d2-e3da-4934-b2d6-6e7b3be9afdc","Type":"ContainerStarted","Data":"fe3792222c03bd8e256699eb28b4bde048120a24e3b53a7d4d4c5499efd4a7db"}
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.277053 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-r7f8q" podStartSLOduration=9.74453611 podStartE2EDuration="17.277036417s" podCreationTimestamp="2025-12-12 16:27:51 +0000 UTC" firstStartedPulling="2025-12-12 16:28:00.474091906 +0000 UTC m=+780.371766738" lastFinishedPulling="2025-12-12 16:28:08.006592213 +0000 UTC m=+787.904267045" observedRunningTime="2025-12-12 16:28:08.275829077 +0000 UTC m=+788.173503909" watchObservedRunningTime="2025-12-12 16:28:08.277036417 +0000 UTC m=+788.174711249"
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.278439 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt" podStartSLOduration=23.983589621 podStartE2EDuration="31.278428583s" podCreationTimestamp="2025-12-12 16:27:37 +0000 UTC" firstStartedPulling="2025-12-12 16:28:00.707445272 +0000 UTC m=+780.605120104" lastFinishedPulling="2025-12-12 16:28:08.002284224 +0000 UTC m=+787.899959066" observedRunningTime="2025-12-12 16:28:08.25657207 +0000 UTC m=+788.154246912" watchObservedRunningTime="2025-12-12 16:28:08.278428583 +0000 UTC m=+788.176103415"
Dec 12 16:28:08 crc kubenswrapper[5130]: I1212 16:28:08.397896 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-lv2hl" podStartSLOduration=19.989798778 podStartE2EDuration="27.397872586s" podCreationTimestamp="2025-12-12 16:27:41 +0000 UTC" firstStartedPulling="2025-12-12 16:28:00.554658445 +0000 UTC m=+780.452333277" lastFinishedPulling="2025-12-12 16:28:07.962732263 +0000 UTC m=+787.860407085" observedRunningTime="2025-12-12 16:28:08.397670281 +0000 UTC m=+788.295345113" watchObservedRunningTime="2025-12-12 16:28:08.397872586 +0000 UTC m=+788.295547418"
Dec 12 16:28:09 crc kubenswrapper[5130]: I1212 16:28:09.261323 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"8b73b1a4-74b4-4b36-9c02-328f2cc9b99a","Type":"ContainerStarted","Data":"6c7c2654787452f3eb41e84798babe6a4e11219f951dec0692a48118bd4af169"}
Dec 12 16:28:09 crc kubenswrapper[5130]: I1212 16:28:09.262083 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:09 crc kubenswrapper[5130]: I1212 16:28:09.317690 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.514625046 podStartE2EDuration="38.317667455s" podCreationTimestamp="2025-12-12 16:27:31 +0000 UTC" firstStartedPulling="2025-12-12 16:27:32.088491532 +0000 UTC m=+751.986166364" lastFinishedPulling="2025-12-12 16:28:00.891533941 +0000 UTC m=+780.789208773" observedRunningTime="2025-12-12 16:28:09.312553495 +0000 UTC m=+789.210228327" watchObservedRunningTime="2025-12-12 16:28:09.317667455 +0000 UTC m=+789.215342287"
Dec 12 16:28:14 crc kubenswrapper[5130]: I1212 16:28:14.266605 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-2kmrt"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.114981 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.116982 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="extract-utilities"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.117045 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="extract-utilities"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.117086 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="registry-server"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.117095 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="registry-server"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.117110 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="extract-content"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.117117 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="extract-content"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.117423 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="098dcbcc-c98d-4de4-9c46-f40973d5ca17" containerName="registry-server"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.210970 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.211134 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.230620 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\""
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.230676 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\""
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.230676 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.231102 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\""
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.231516 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ff94g\""
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321106 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321163 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321361 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321475 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321557 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5c7\" (UniqueName: \"kubernetes.io/projected/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-kube-api-access-tt5c7\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321682 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321714 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321746 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321765 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321788 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321851 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.321965 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.322061 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424147 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424273 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424315 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424353 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424379 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424426 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424469 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424502 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424547 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424586 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424632 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424660 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.424698 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5c7\" (UniqueName: \"kubernetes.io/projected/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-kube-api-access-tt5c7\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.425243 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.425442 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.427030 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.427761 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.428449 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.428475 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.428621 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.433552 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.435959 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.440028 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.440489 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.443671 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5c7\" (UniqueName: \"kubernetes.io/projected/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-kube-api-access-tt5c7\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.449092 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.535558 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:18 crc kubenswrapper[5130]: I1212 16:28:18.842204 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 12 16:28:18 crc kubenswrapper[5130]: W1212 16:28:18.846055 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48700ccb_8fc3_4b07_af36_f0ff8573dc6a.slice/crio-1b850b36a6d06f6c415a15e0f884b2a0ab03b742b3c4517bf1f07e3c6aaca997 WatchSource:0}: Error finding container 1b850b36a6d06f6c415a15e0f884b2a0ab03b742b3c4517bf1f07e3c6aaca997: Status 404 returned error can't find the container with id 1b850b36a6d06f6c415a15e0f884b2a0ab03b742b3c4517bf1f07e3c6aaca997
Dec 12 16:28:19 crc kubenswrapper[5130]: I1212 16:28:19.374953 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"48700ccb-8fc3-4b07-af36-f0ff8573dc6a","Type":"ContainerStarted","Data":"1b850b36a6d06f6c415a15e0f884b2a0ab03b742b3c4517bf1f07e3c6aaca997"}
Dec 12 16:28:20 crc kubenswrapper[5130]: I1212 16:28:20.347667 5130 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="8b73b1a4-74b4-4b36-9c02-328f2cc9b99a" containerName="elasticsearch" probeResult="failure" output=<
Dec 12 16:28:20 crc kubenswrapper[5130]: {"timestamp": "2025-12-12T16:28:20+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 12 16:28:20 crc kubenswrapper[5130]: >
Dec 12 16:28:24 crc kubenswrapper[5130]: I1212 16:28:24.423633 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"48700ccb-8fc3-4b07-af36-f0ff8573dc6a","Type":"ContainerStarted","Data":"41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4"}
Dec 12 16:28:24 crc kubenswrapper[5130]: I1212 16:28:24.479956 5130 ???:1] "http: TLS handshake error from 192.168.126.11:40570: no serving certificate available for the kubelet"
Dec 12 16:28:25 crc kubenswrapper[5130]: I1212 16:28:25.511746 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 12 16:28:25 crc kubenswrapper[5130]: I1212 16:28:25.902231 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.438496 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="48700ccb-8fc3-4b07-af36-f0ff8573dc6a" containerName="git-clone" containerID="cri-o://41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4" gracePeriod=30
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.864005 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_48700ccb-8fc3-4b07-af36-f0ff8573dc6a/git-clone/0.log"
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.864547 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966582 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-run\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966694 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-ca-bundles\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966727 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt5c7\" (UniqueName: \"kubernetes.io/projected/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-kube-api-access-tt5c7\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966772 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-system-configs\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966834 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-proxy-ca-bundles\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966861 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-pull\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966906 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.966964 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-root\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.967001 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-node-pullsecrets\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.967027 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-blob-cache\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.967058 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildworkdir\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.967101 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildcachedir\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.967140 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-push\") pod \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\" (UID: \"48700ccb-8fc3-4b07-af36-f0ff8573dc6a\") "
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.968813 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.969262 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.971608 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.971689 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.971967 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.972358 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.972381 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.972769 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.973035 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.977347 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-pull" (OuterVolumeSpecName: "builder-dockercfg-ff94g-pull") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "builder-dockercfg-ff94g-pull".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.977391 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.977370 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-push" (OuterVolumeSpecName: "builder-dockercfg-ff94g-push") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "builder-dockercfg-ff94g-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:28:26 crc kubenswrapper[5130]: I1212 16:28:26.977476 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-kube-api-access-tt5c7" (OuterVolumeSpecName: "kube-api-access-tt5c7") pod "48700ccb-8fc3-4b07-af36-f0ff8573dc6a" (UID: "48700ccb-8fc3-4b07-af36-f0ff8573dc6a"). InnerVolumeSpecName "kube-api-access-tt5c7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069496 5130 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069556 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tt5c7\" (UniqueName: \"kubernetes.io/projected/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-kube-api-access-tt5c7\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069573 5130 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069587 5130 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069600 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-pull\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069617 5130 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069632 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069647 5130 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069660 5130 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069673 5130 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069687 5130 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069704 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-builder-dockercfg-ff94g-push\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.069720 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48700ccb-8fc3-4b07-af36-f0ff8573dc6a-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.447596 5130 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_48700ccb-8fc3-4b07-af36-f0ff8573dc6a/git-clone/0.log" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.447667 5130 generic.go:358] "Generic (PLEG): container finished" podID="48700ccb-8fc3-4b07-af36-f0ff8573dc6a" containerID="41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4" exitCode=1 Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.447808 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.447805 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"48700ccb-8fc3-4b07-af36-f0ff8573dc6a","Type":"ContainerDied","Data":"41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4"} Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.447932 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"48700ccb-8fc3-4b07-af36-f0ff8573dc6a","Type":"ContainerDied","Data":"1b850b36a6d06f6c415a15e0f884b2a0ab03b742b3c4517bf1f07e3c6aaca997"} Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.447955 5130 scope.go:117] "RemoveContainer" containerID="41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.486153 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.492033 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.492137 5130 scope.go:117] "RemoveContainer" containerID="41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4" Dec 
12 16:28:27 crc kubenswrapper[5130]: E1212 16:28:27.494093 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4\": container with ID starting with 41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4 not found: ID does not exist" containerID="41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4" Dec 12 16:28:27 crc kubenswrapper[5130]: I1212 16:28:27.494212 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4"} err="failed to get container status \"41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4\": rpc error: code = NotFound desc = could not find container \"41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4\": container with ID starting with 41a40f67e0849d941100e26ceb7c2627f6baea368bfbcda4c304d79b3c80c0e4 not found: ID does not exist" Dec 12 16:28:28 crc kubenswrapper[5130]: I1212 16:28:28.392280 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48700ccb-8fc3-4b07-af36-f0ff8573dc6a" path="/var/lib/kubelet/pods/48700ccb-8fc3-4b07-af36-f0ff8573dc6a/volumes" Dec 12 16:28:36 crc kubenswrapper[5130]: I1212 16:28:36.951925 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 16:28:36 crc kubenswrapper[5130]: I1212 16:28:36.953641 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48700ccb-8fc3-4b07-af36-f0ff8573dc6a" containerName="git-clone" Dec 12 16:28:36 crc kubenswrapper[5130]: I1212 16:28:36.953662 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="48700ccb-8fc3-4b07-af36-f0ff8573dc6a" containerName="git-clone" Dec 12 16:28:36 crc kubenswrapper[5130]: I1212 16:28:36.953794 5130 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="48700ccb-8fc3-4b07-af36-f0ff8573dc6a" containerName="git-clone" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.350207 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.350404 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.353322 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-ca\"" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.353566 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-sys-config\"" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.354090 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ff94g\"" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.354463 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.354552 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-global-ca\"" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.498437 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc 
kubenswrapper[5130]: I1212 16:28:40.498626 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.498786 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.499061 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.499306 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.499447 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.499522 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.499690 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.499811 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.499927 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvsg\" (UniqueName: \"kubernetes.io/projected/a0bdf470-1147-4c82-95ba-9d4b8c87f076-kube-api-access-gjvsg\") pod \"service-telemetry-framework-index-2-build\" (UID: 
\"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.500025 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.500150 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.500277 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601753 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc 
kubenswrapper[5130]: I1212 16:28:40.601818 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601843 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601859 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601883 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gjvsg\" (UniqueName: \"kubernetes.io/projected/a0bdf470-1147-4c82-95ba-9d4b8c87f076-kube-api-access-gjvsg\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601907 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601932 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601980 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.602007 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.602035 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " 
pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.602069 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.602088 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.602143 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.602592 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.601760 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.602817 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.603091 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.603140 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.603696 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.603948 5130 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.604277 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.604770 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.616239 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.619476 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-service-telemetry-framework-index-dockercfg-user-build-volume\") 
pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.620410 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.622869 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjvsg\" (UniqueName: \"kubernetes.io/projected/a0bdf470-1147-4c82-95ba-9d4b8c87f076-kube-api-access-gjvsg\") pod \"service-telemetry-framework-index-2-build\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:40 crc kubenswrapper[5130]: I1212 16:28:40.672131 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:41 crc kubenswrapper[5130]: I1212 16:28:41.112969 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 16:28:41 crc kubenswrapper[5130]: I1212 16:28:41.559997 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"a0bdf470-1147-4c82-95ba-9d4b8c87f076","Type":"ContainerStarted","Data":"64676e9c945d07cb91caa2a63ef6e06fc4232117e85b1e468f1091c7327252b2"} Dec 12 16:28:48 crc kubenswrapper[5130]: I1212 16:28:48.614303 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"a0bdf470-1147-4c82-95ba-9d4b8c87f076","Type":"ContainerStarted","Data":"7eca3f2bc00a37129e52e192590861ded3f3f96654c5577e5f531a0e332a18e9"} Dec 12 16:28:49 crc kubenswrapper[5130]: I1212 16:28:49.672458 5130 ???:1] "http: TLS handshake error from 192.168.126.11:54714: no serving certificate available for the kubelet" Dec 12 16:28:50 crc kubenswrapper[5130]: I1212 16:28:50.703479 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 16:28:50 crc kubenswrapper[5130]: I1212 16:28:50.703789 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="a0bdf470-1147-4c82-95ba-9d4b8c87f076" containerName="git-clone" containerID="cri-o://7eca3f2bc00a37129e52e192590861ded3f3f96654c5577e5f531a0e332a18e9" gracePeriod=30 Dec 12 16:28:52 crc kubenswrapper[5130]: I1212 16:28:52.730549 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Dec 12 16:28:52 crc kubenswrapper[5130]: I1212 16:28:52.731116 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:28:53 crc kubenswrapper[5130]: I1212 16:28:53.651164 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_a0bdf470-1147-4c82-95ba-9d4b8c87f076/git-clone/0.log" Dec 12 16:28:53 crc kubenswrapper[5130]: I1212 16:28:53.651237 5130 generic.go:358] "Generic (PLEG): container finished" podID="a0bdf470-1147-4c82-95ba-9d4b8c87f076" containerID="7eca3f2bc00a37129e52e192590861ded3f3f96654c5577e5f531a0e332a18e9" exitCode=1 Dec 12 16:28:53 crc kubenswrapper[5130]: I1212 16:28:53.651307 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"a0bdf470-1147-4c82-95ba-9d4b8c87f076","Type":"ContainerDied","Data":"7eca3f2bc00a37129e52e192590861ded3f3f96654c5577e5f531a0e332a18e9"} Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.804204 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_a0bdf470-1147-4c82-95ba-9d4b8c87f076/git-clone/0.log" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.805238 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.928954 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-pull\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929034 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildworkdir\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929053 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-ca-bundles\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929072 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929308 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-proxy-ca-bundles\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 
16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929365 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-root\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929411 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-blob-cache\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929438 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-run\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929485 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-system-configs\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929529 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjvsg\" (UniqueName: \"kubernetes.io/projected/a0bdf470-1147-4c82-95ba-9d4b8c87f076-kube-api-access-gjvsg\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929570 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-push\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929628 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildcachedir\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.929692 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-node-pullsecrets\") pod \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\" (UID: \"a0bdf470-1147-4c82-95ba-9d4b8c87f076\") " Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930012 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930057 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930049 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930128 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930313 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930375 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930472 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.930841 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.931293 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.937298 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-pull" (OuterVolumeSpecName: "builder-dockercfg-ff94g-pull") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "builder-dockercfg-ff94g-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.937367 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.937429 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-push" (OuterVolumeSpecName: "builder-dockercfg-ff94g-push") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "builder-dockercfg-ff94g-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:28:59 crc kubenswrapper[5130]: I1212 16:28:59.938362 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bdf470-1147-4c82-95ba-9d4b8c87f076-kube-api-access-gjvsg" (OuterVolumeSpecName: "kube-api-access-gjvsg") pod "a0bdf470-1147-4c82-95ba-9d4b8c87f076" (UID: "a0bdf470-1147-4c82-95ba-9d4b8c87f076"). InnerVolumeSpecName "kube-api-access-gjvsg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031827 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-push\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031879 5130 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031893 5130 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0bdf470-1147-4c82-95ba-9d4b8c87f076-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031906 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-builder-dockercfg-ff94g-pull\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031923 5130 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031936 5130 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031948 5130 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/a0bdf470-1147-4c82-95ba-9d4b8c87f076-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031965 5130 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031975 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031985 5130 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.031995 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a0bdf470-1147-4c82-95ba-9d4b8c87f076-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.032004 5130 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a0bdf470-1147-4c82-95ba-9d4b8c87f076-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.032013 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gjvsg\" (UniqueName: \"kubernetes.io/projected/a0bdf470-1147-4c82-95ba-9d4b8c87f076-kube-api-access-gjvsg\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.708297 5130 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_a0bdf470-1147-4c82-95ba-9d4b8c87f076/git-clone/0.log" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.708424 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"a0bdf470-1147-4c82-95ba-9d4b8c87f076","Type":"ContainerDied","Data":"64676e9c945d07cb91caa2a63ef6e06fc4232117e85b1e468f1091c7327252b2"} Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.708487 5130 scope.go:117] "RemoveContainer" containerID="7eca3f2bc00a37129e52e192590861ded3f3f96654c5577e5f531a0e332a18e9" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.708705 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.741680 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 16:29:00 crc kubenswrapper[5130]: I1212 16:29:00.750071 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 12 16:29:02 crc kubenswrapper[5130]: I1212 16:29:02.163838 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 16:29:02 crc kubenswrapper[5130]: I1212 16:29:02.164911 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0bdf470-1147-4c82-95ba-9d4b8c87f076" containerName="git-clone" Dec 12 16:29:02 crc kubenswrapper[5130]: I1212 16:29:02.164924 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bdf470-1147-4c82-95ba-9d4b8c87f076" containerName="git-clone" Dec 12 16:29:02 crc kubenswrapper[5130]: I1212 16:29:02.165341 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0bdf470-1147-4c82-95ba-9d4b8c87f076" 
containerName="git-clone" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.304373 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.304607 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.307793 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-global-ca\"" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.307959 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ff94g\"" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.308049 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.308826 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-ca\"" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.311294 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-sys-config\"" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.316797 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0bdf470-1147-4c82-95ba-9d4b8c87f076" path="/var/lib/kubelet/pods/a0bdf470-1147-4c82-95ba-9d4b8c87f076/volumes" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.381503 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.381573 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.381637 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.381882 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q95fz\" (UniqueName: \"kubernetes.io/projected/cc0b2c0b-41d6-47c9-9812-9f70b101293e-kube-api-access-q95fz\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.381968 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382013 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382057 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382158 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382201 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382262 5130 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382297 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382343 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.382389 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483114 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483199 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483231 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483248 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q95fz\" (UniqueName: \"kubernetes.io/projected/cc0b2c0b-41d6-47c9-9812-9f70b101293e-kube-api-access-q95fz\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483274 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " 
pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483295 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483335 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483390 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483420 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483455 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483482 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483515 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483548 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483681 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483753 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.483905 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.484074 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.484138 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.484244 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.484371 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.484746 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.484908 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.491658 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.492025 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.504650 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q95fz\" (UniqueName: \"kubernetes.io/projected/cc0b2c0b-41d6-47c9-9812-9f70b101293e-kube-api-access-q95fz\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.505049 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.623468 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:03 crc kubenswrapper[5130]: I1212 16:29:03.858701 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 12 16:29:04 crc kubenswrapper[5130]: I1212 16:29:04.743229 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"cc0b2c0b-41d6-47c9-9812-9f70b101293e","Type":"ContainerStarted","Data":"22a3d8dbda1abf7ee9c885dfd7e1ec87a4a8e2d57b94763dda6f0c1b4996ba68"}
Dec 12 16:29:05 crc kubenswrapper[5130]: I1212 16:29:05.750358 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"cc0b2c0b-41d6-47c9-9812-9f70b101293e","Type":"ContainerStarted","Data":"3ad90f697a7d58da32460609e56aba46a96c085b52ceb8a595f06350f3163e2e"}
Dec 12 16:29:05 crc kubenswrapper[5130]: I1212 16:29:05.799484 5130 ???:1] "http: TLS handshake error from 192.168.126.11:36848: no serving certificate available for the kubelet"
Dec 12 16:29:06 crc kubenswrapper[5130]: I1212 16:29:06.833363 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 12 16:29:07 crc kubenswrapper[5130]: I1212 16:29:07.774514 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="cc0b2c0b-41d6-47c9-9812-9f70b101293e" containerName="git-clone" containerID="cri-o://3ad90f697a7d58da32460609e56aba46a96c085b52ceb8a595f06350f3163e2e" gracePeriod=30
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.815490 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_cc0b2c0b-41d6-47c9-9812-9f70b101293e/git-clone/0.log"
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.815757 5130 generic.go:358] "Generic (PLEG): container finished" podID="cc0b2c0b-41d6-47c9-9812-9f70b101293e" containerID="3ad90f697a7d58da32460609e56aba46a96c085b52ceb8a595f06350f3163e2e" exitCode=1
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.815799 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"cc0b2c0b-41d6-47c9-9812-9f70b101293e","Type":"ContainerDied","Data":"3ad90f697a7d58da32460609e56aba46a96c085b52ceb8a595f06350f3163e2e"}
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.845318 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_cc0b2c0b-41d6-47c9-9812-9f70b101293e/git-clone/0.log"
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.845395 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962282 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q95fz\" (UniqueName: \"kubernetes.io/projected/cc0b2c0b-41d6-47c9-9812-9f70b101293e-kube-api-access-q95fz\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962367 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildworkdir\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962408 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-system-configs\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962436 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-blob-cache\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962502 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962585 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-pull\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962608 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-ca-bundles\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962636 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-node-pullsecrets\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962671 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-proxy-ca-bundles\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962712 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962802 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildcachedir\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962833 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-run\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962855 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-push\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.962877 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-root\") pod \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\" (UID: \"cc0b2c0b-41d6-47c9-9812-9f70b101293e\") "
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963039 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963078 5130 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963266 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963363 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963385 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963648 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963673 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.963922 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.964173 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.967786 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-push" (OuterVolumeSpecName: "builder-dockercfg-ff94g-push") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "builder-dockercfg-ff94g-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.968201 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-pull" (OuterVolumeSpecName: "builder-dockercfg-ff94g-pull") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "builder-dockercfg-ff94g-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.968220 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 12 16:29:08 crc kubenswrapper[5130]: I1212 16:29:08.968447 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0b2c0b-41d6-47c9-9812-9f70b101293e-kube-api-access-q95fz" (OuterVolumeSpecName: "kube-api-access-q95fz") pod "cc0b2c0b-41d6-47c9-9812-9f70b101293e" (UID: "cc0b2c0b-41d6-47c9-9812-9f70b101293e"). InnerVolumeSpecName "kube-api-access-q95fz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064880 5130 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064910 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064922 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-push\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064933 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064942 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q95fz\" (UniqueName: \"kubernetes.io/projected/cc0b2c0b-41d6-47c9-9812-9f70b101293e-kube-api-access-q95fz\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064952 5130 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064961 5130 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064970 5130 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064980 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/cc0b2c0b-41d6-47c9-9812-9f70b101293e-builder-dockercfg-ff94g-pull\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064989 5130 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.064997 5130 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc0b2c0b-41d6-47c9-9812-9f70b101293e-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.065004 5130 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc0b2c0b-41d6-47c9-9812-9f70b101293e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.823128 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_cc0b2c0b-41d6-47c9-9812-9f70b101293e/git-clone/0.log"
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.823346 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.823373 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"cc0b2c0b-41d6-47c9-9812-9f70b101293e","Type":"ContainerDied","Data":"22a3d8dbda1abf7ee9c885dfd7e1ec87a4a8e2d57b94763dda6f0c1b4996ba68"}
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.823460 5130 scope.go:117] "RemoveContainer" containerID="3ad90f697a7d58da32460609e56aba46a96c085b52ceb8a595f06350f3163e2e"
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.859127 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 12 16:29:09 crc kubenswrapper[5130]: I1212 16:29:09.864386 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 12 16:29:10 crc kubenswrapper[5130]: I1212 16:29:10.381111 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0b2c0b-41d6-47c9-9812-9f70b101293e" path="/var/lib/kubelet/pods/cc0b2c0b-41d6-47c9-9812-9f70b101293e/volumes"
Dec 12 16:29:18 crc kubenswrapper[5130]: I1212 16:29:18.257526 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 12 16:29:18 crc kubenswrapper[5130]: I1212 16:29:18.260158 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc0b2c0b-41d6-47c9-9812-9f70b101293e" containerName="git-clone"
Dec 12 16:29:18 crc kubenswrapper[5130]: I1212 16:29:18.260225 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0b2c0b-41d6-47c9-9812-9f70b101293e" containerName="git-clone"
Dec 12 16:29:18 crc kubenswrapper[5130]: I1212 16:29:18.260403 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="cc0b2c0b-41d6-47c9-9812-9f70b101293e" containerName="git-clone"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.105793 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.108803 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-ca\""
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.109543 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-ff94g\""
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.109859 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-global-ca\""
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.110071 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-sys-config\""
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.110405 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.121419 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.226926 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227039 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227099 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227132 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227396 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227593 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227688 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227729 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.227802 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.228016 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.228071 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.228117 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.228151 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg88x\" (UniqueName: \"kubernetes.io/projected/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-kube-api-access-wg88x\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.329942 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330009 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330294 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330433 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330561 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330595 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330690 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330769 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wg88x\" (UniqueName: \"kubernetes.io/projected/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-kube-api-access-wg88x\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330910 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212
16:29:19.330945 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330924 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.330917 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.331093 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.331156 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: 
\"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.331359 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.331494 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.331555 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.331687 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.332099 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.332593 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.332957 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.333572 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.339922 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" 
Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.339922 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.340849 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.352718 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg88x\" (UniqueName: \"kubernetes.io/projected/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-kube-api-access-wg88x\") pod \"service-telemetry-framework-index-4-build\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.426915 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.657538 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 16:29:19 crc kubenswrapper[5130]: I1212 16:29:19.902649 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"df3b5ffb-d260-40e8-bd13-ce6656fca9e0","Type":"ContainerStarted","Data":"e72c026b7694679c3c7620432ab3882148147f8b41516626acda65a5bf9bd710"} Dec 12 16:29:20 crc kubenswrapper[5130]: I1212 16:29:20.913102 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"df3b5ffb-d260-40e8-bd13-ce6656fca9e0","Type":"ContainerStarted","Data":"6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303"} Dec 12 16:29:20 crc kubenswrapper[5130]: I1212 16:29:20.973867 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38888: no serving certificate available for the kubelet" Dec 12 16:29:22 crc kubenswrapper[5130]: I1212 16:29:22.004878 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 16:29:22 crc kubenswrapper[5130]: I1212 16:29:22.730664 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:29:22 crc kubenswrapper[5130]: I1212 16:29:22.730742 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:29:22 crc kubenswrapper[5130]: I1212 16:29:22.925420 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="df3b5ffb-d260-40e8-bd13-ce6656fca9e0" containerName="git-clone" containerID="cri-o://6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303" gracePeriod=30 Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.250050 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-cj72z"] Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.260483 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-cj72z" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.263347 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-cj72z"] Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.264622 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-n6ssc\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.295273 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd628\" (UniqueName: \"kubernetes.io/projected/896500d1-8185-4d67-9e0d-c837eba1a9d1-kube-api-access-sd628\") pod \"infrawatch-operators-cj72z\" (UID: \"896500d1-8185-4d67-9e0d-c837eba1a9d1\") " pod="service-telemetry/infrawatch-operators-cj72z" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.360838 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_df3b5ffb-d260-40e8-bd13-ce6656fca9e0/git-clone/0.log" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.360916 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397388 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-ca-bundles\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397466 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397653 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-run\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397760 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-blob-cache\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397815 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-pull\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " 
Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397858 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildcachedir\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397908 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-node-pullsecrets\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397936 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-root\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.397978 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildworkdir\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398037 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-system-configs\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398065 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg88x\" (UniqueName: 
\"kubernetes.io/projected/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-kube-api-access-wg88x\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398081 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398243 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-push\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398282 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-proxy-ca-bundles\") pod \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\" (UID: \"df3b5ffb-d260-40e8-bd13-ce6656fca9e0\") " Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398391 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398393 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398431 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398443 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398413 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398563 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sd628\" (UniqueName: \"kubernetes.io/projected/896500d1-8185-4d67-9e0d-c837eba1a9d1-kube-api-access-sd628\") pod \"infrawatch-operators-cj72z\" (UID: \"896500d1-8185-4d67-9e0d-c837eba1a9d1\") " pod="service-telemetry/infrawatch-operators-cj72z" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.398972 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399045 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399241 5130 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399263 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399275 5130 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399284 5130 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399292 5130 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399300 5130 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399308 5130 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-build-blob-cache\") on node \"crc\" DevicePath 
\"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399316 5130 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.399524 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.406425 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.406462 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-pull" (OuterVolumeSpecName: "builder-dockercfg-ff94g-pull") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "builder-dockercfg-ff94g-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.406552 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-kube-api-access-wg88x" (OuterVolumeSpecName: "kube-api-access-wg88x") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "kube-api-access-wg88x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.406758 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-push" (OuterVolumeSpecName: "builder-dockercfg-ff94g-push") pod "df3b5ffb-d260-40e8-bd13-ce6656fca9e0" (UID: "df3b5ffb-d260-40e8-bd13-ce6656fca9e0"). InnerVolumeSpecName "builder-dockercfg-ff94g-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.418776 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd628\" (UniqueName: \"kubernetes.io/projected/896500d1-8185-4d67-9e0d-c837eba1a9d1-kube-api-access-sd628\") pod \"infrawatch-operators-cj72z\" (UID: \"896500d1-8185-4d67-9e0d-c837eba1a9d1\") " pod="service-telemetry/infrawatch-operators-cj72z" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.501062 5130 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.501442 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-pull\" (UniqueName: 
\"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-pull\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.501462 5130 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.501478 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wg88x\" (UniqueName: \"kubernetes.io/projected/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-kube-api-access-wg88x\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.501489 5130 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-ff94g-push\" (UniqueName: \"kubernetes.io/secret/df3b5ffb-d260-40e8-bd13-ce6656fca9e0-builder-dockercfg-ff94g-push\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.590868 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-cj72z" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.932931 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_df3b5ffb-d260-40e8-bd13-ce6656fca9e0/git-clone/0.log" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.932975 5130 generic.go:358] "Generic (PLEG): container finished" podID="df3b5ffb-d260-40e8-bd13-ce6656fca9e0" containerID="6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303" exitCode=1 Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.933053 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"df3b5ffb-d260-40e8-bd13-ce6656fca9e0","Type":"ContainerDied","Data":"6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303"} Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.933081 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"df3b5ffb-d260-40e8-bd13-ce6656fca9e0","Type":"ContainerDied","Data":"e72c026b7694679c3c7620432ab3882148147f8b41516626acda65a5bf9bd710"} Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.933096 5130 scope.go:117] "RemoveContainer" containerID="6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.933102 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.950059 5130 scope.go:117] "RemoveContainer" containerID="6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303" Dec 12 16:29:23 crc kubenswrapper[5130]: E1212 16:29:23.950617 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303\": container with ID starting with 6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303 not found: ID does not exist" containerID="6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.950670 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303"} err="failed to get container status \"6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303\": rpc error: code = NotFound desc = could not find container \"6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303\": container with ID starting with 6775428952cb1380c03b41881473dbf745a295bb748b303741e8eb62b3b9c303 not found: ID does not exist" Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.968562 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 16:29:23 crc kubenswrapper[5130]: I1212 16:29:23.976116 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 12 16:29:24 crc kubenswrapper[5130]: I1212 16:29:24.017856 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-cj72z"] Dec 12 16:29:24 crc kubenswrapper[5130]: W1212 16:29:24.019515 5130 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod896500d1_8185_4d67_9e0d_c837eba1a9d1.slice/crio-05fb2c827c084b307ef59d3eee3d07f35dabc8eee4b06b1b1986e20b9cedeeb2 WatchSource:0}: Error finding container 05fb2c827c084b307ef59d3eee3d07f35dabc8eee4b06b1b1986e20b9cedeeb2: Status 404 returned error can't find the container with id 05fb2c827c084b307ef59d3eee3d07f35dabc8eee4b06b1b1986e20b9cedeeb2 Dec 12 16:29:24 crc kubenswrapper[5130]: E1212 16:29:24.085311 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:29:24 crc kubenswrapper[5130]: E1212 16:29:24.085843 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sd628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cj72z_service-telemetry(896500d1-8185-4d67-9e0d-c837eba1a9d1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 16:29:24 crc kubenswrapper[5130]: E1212 16:29:24.087145 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cj72z" podUID="896500d1-8185-4d67-9e0d-c837eba1a9d1" Dec 12 16:29:24 crc kubenswrapper[5130]: I1212 16:29:24.376805 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3b5ffb-d260-40e8-bd13-ce6656fca9e0" path="/var/lib/kubelet/pods/df3b5ffb-d260-40e8-bd13-ce6656fca9e0/volumes" Dec 12 16:29:24 crc kubenswrapper[5130]: I1212 16:29:24.940378 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-cj72z" event={"ID":"896500d1-8185-4d67-9e0d-c837eba1a9d1","Type":"ContainerStarted","Data":"05fb2c827c084b307ef59d3eee3d07f35dabc8eee4b06b1b1986e20b9cedeeb2"} Dec 12 16:29:24 crc kubenswrapper[5130]: E1212 16:29:24.941819 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cj72z" podUID="896500d1-8185-4d67-9e0d-c837eba1a9d1" Dec 12 16:29:25 crc kubenswrapper[5130]: E1212 16:29:25.951303 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cj72z" podUID="896500d1-8185-4d67-9e0d-c837eba1a9d1" Dec 12 16:29:27 crc kubenswrapper[5130]: I1212 16:29:27.845323 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-cj72z"] Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.119461 5130 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-cj72z" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.170991 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd628\" (UniqueName: \"kubernetes.io/projected/896500d1-8185-4d67-9e0d-c837eba1a9d1-kube-api-access-sd628\") pod \"896500d1-8185-4d67-9e0d-c837eba1a9d1\" (UID: \"896500d1-8185-4d67-9e0d-c837eba1a9d1\") " Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.176606 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896500d1-8185-4d67-9e0d-c837eba1a9d1-kube-api-access-sd628" (OuterVolumeSpecName: "kube-api-access-sd628") pod "896500d1-8185-4d67-9e0d-c837eba1a9d1" (UID: "896500d1-8185-4d67-9e0d-c837eba1a9d1"). InnerVolumeSpecName "kube-api-access-sd628". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.272763 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sd628\" (UniqueName: \"kubernetes.io/projected/896500d1-8185-4d67-9e0d-c837eba1a9d1-kube-api-access-sd628\") on node \"crc\" DevicePath \"\"" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.650334 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-cdpts"] Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.651492 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df3b5ffb-d260-40e8-bd13-ce6656fca9e0" containerName="git-clone" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.651518 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3b5ffb-d260-40e8-bd13-ce6656fca9e0" containerName="git-clone" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.651713 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="df3b5ffb-d260-40e8-bd13-ce6656fca9e0" 
containerName="git-clone" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.714046 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-cdpts"] Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.714265 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-cdpts" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.777807 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/eeed1a9b-f386-4d11-b730-03bcb44f9a55-kube-api-access-p4zc7\") pod \"infrawatch-operators-cdpts\" (UID: \"eeed1a9b-f386-4d11-b730-03bcb44f9a55\") " pod="service-telemetry/infrawatch-operators-cdpts" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.879781 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/eeed1a9b-f386-4d11-b730-03bcb44f9a55-kube-api-access-p4zc7\") pod \"infrawatch-operators-cdpts\" (UID: \"eeed1a9b-f386-4d11-b730-03bcb44f9a55\") " pod="service-telemetry/infrawatch-operators-cdpts" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.900560 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/eeed1a9b-f386-4d11-b730-03bcb44f9a55-kube-api-access-p4zc7\") pod \"infrawatch-operators-cdpts\" (UID: \"eeed1a9b-f386-4d11-b730-03bcb44f9a55\") " pod="service-telemetry/infrawatch-operators-cdpts" Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.966993 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-cj72z" event={"ID":"896500d1-8185-4d67-9e0d-c837eba1a9d1","Type":"ContainerDied","Data":"05fb2c827c084b307ef59d3eee3d07f35dabc8eee4b06b1b1986e20b9cedeeb2"} Dec 12 16:29:28 crc kubenswrapper[5130]: I1212 16:29:28.967042 
5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-cj72z" Dec 12 16:29:29 crc kubenswrapper[5130]: I1212 16:29:29.001110 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-cj72z"] Dec 12 16:29:29 crc kubenswrapper[5130]: I1212 16:29:29.007346 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-cj72z"] Dec 12 16:29:29 crc kubenswrapper[5130]: I1212 16:29:29.027826 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-cdpts" Dec 12 16:29:29 crc kubenswrapper[5130]: I1212 16:29:29.226254 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-cdpts"] Dec 12 16:29:29 crc kubenswrapper[5130]: E1212 16:29:29.288520 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:29:29 crc kubenswrapper[5130]: E1212 16:29:29.289514 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4zc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cdpts_service-telemetry(eeed1a9b-f386-4d11-b730-03bcb44f9a55): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 16:29:29 crc kubenswrapper[5130]: E1212 16:29:29.290962 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:29:29 crc kubenswrapper[5130]: I1212 16:29:29.977390 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-cdpts" 
event={"ID":"eeed1a9b-f386-4d11-b730-03bcb44f9a55","Type":"ContainerStarted","Data":"264c7342999e104f19f407c20d616e113a7b10528040fa13b52d8cb14847e428"} Dec 12 16:29:29 crc kubenswrapper[5130]: E1212 16:29:29.979456 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:29:30 crc kubenswrapper[5130]: I1212 16:29:30.379693 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="896500d1-8185-4d67-9e0d-c837eba1a9d1" path="/var/lib/kubelet/pods/896500d1-8185-4d67-9e0d-c837eba1a9d1/volumes" Dec 12 16:29:30 crc kubenswrapper[5130]: E1212 16:29:30.986572 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:29:46 crc kubenswrapper[5130]: E1212 16:29:46.439774 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:29:46 crc kubenswrapper[5130]: E1212 16:29:46.440793 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4zc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cdpts_service-telemetry(eeed1a9b-f386-4d11-b730-03bcb44f9a55): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 16:29:46 crc kubenswrapper[5130]: E1212 16:29:46.442043 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:29:52 crc kubenswrapper[5130]: I1212 16:29:52.730231 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:29:52 crc kubenswrapper[5130]: I1212 16:29:52.731066 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:29:52 crc kubenswrapper[5130]: I1212 16:29:52.731148 5130 
kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:29:52 crc kubenswrapper[5130]: I1212 16:29:52.732203 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3adb890ff85b18dd025cb02aa6704930a7f2cdc1bd92119b5fe1c8a455d2a99e"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:29:52 crc kubenswrapper[5130]: I1212 16:29:52.732308 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://3adb890ff85b18dd025cb02aa6704930a7f2cdc1bd92119b5fe1c8a455d2a99e" gracePeriod=600 Dec 12 16:29:53 crc kubenswrapper[5130]: I1212 16:29:53.158934 5130 generic.go:358] "Generic (PLEG): container finished" podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="3adb890ff85b18dd025cb02aa6704930a7f2cdc1bd92119b5fe1c8a455d2a99e" exitCode=0 Dec 12 16:29:53 crc kubenswrapper[5130]: I1212 16:29:53.159000 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"3adb890ff85b18dd025cb02aa6704930a7f2cdc1bd92119b5fe1c8a455d2a99e"} Dec 12 16:29:53 crc kubenswrapper[5130]: I1212 16:29:53.159069 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"dbf5bb6f7e04eed65e9d6c35b6039c8cb076ec0ac681151d1925ab21dbb68a59"} Dec 12 16:29:53 crc kubenswrapper[5130]: I1212 16:29:53.159100 5130 scope.go:117] "RemoveContainer" 
containerID="456c71e76ba0cd0d996bbd0f00a10ca55a78f35663150737c8d410c0007a70cd" Dec 12 16:29:57 crc kubenswrapper[5130]: E1212 16:29:57.370238 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.160962 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh"] Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.187070 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh"] Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.187472 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.191285 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.192371 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.263859 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab30f5e0-5097-4413-bb3e-fe8ca350378f-config-volume\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.263974 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab30f5e0-5097-4413-bb3e-fe8ca350378f-secret-volume\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.264003 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwxm\" (UniqueName: \"kubernetes.io/projected/ab30f5e0-5097-4413-bb3e-fe8ca350378f-kube-api-access-hmwxm\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.364873 5130 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab30f5e0-5097-4413-bb3e-fe8ca350378f-config-volume\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.364941 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab30f5e0-5097-4413-bb3e-fe8ca350378f-secret-volume\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.364965 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hmwxm\" (UniqueName: \"kubernetes.io/projected/ab30f5e0-5097-4413-bb3e-fe8ca350378f-kube-api-access-hmwxm\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.368742 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.373672 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab30f5e0-5097-4413-bb3e-fe8ca350378f-secret-volume\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.377133 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ab30f5e0-5097-4413-bb3e-fe8ca350378f-config-volume\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.387448 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmwxm\" (UniqueName: \"kubernetes.io/projected/ab30f5e0-5097-4413-bb3e-fe8ca350378f-kube-api-access-hmwxm\") pod \"collect-profiles-29425950-g52jh\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.520441 5130 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.529275 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.717475 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.732525 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.746620 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.758029 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:30:00 crc kubenswrapper[5130]: I1212 16:30:00.969214 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh"] Dec 12 16:30:00 crc kubenswrapper[5130]: W1212 16:30:00.974910 5130 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab30f5e0_5097_4413_bb3e_fe8ca350378f.slice/crio-766e3243f51588112a816c6f02f6d7f3501538eef1dfb32bee7ccca7116521e4 WatchSource:0}: Error finding container 766e3243f51588112a816c6f02f6d7f3501538eef1dfb32bee7ccca7116521e4: Status 404 returned error can't find the container with id 766e3243f51588112a816c6f02f6d7f3501538eef1dfb32bee7ccca7116521e4 Dec 12 16:30:01 crc kubenswrapper[5130]: I1212 16:30:01.220408 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" 
event={"ID":"ab30f5e0-5097-4413-bb3e-fe8ca350378f","Type":"ContainerStarted","Data":"453898f774a7d00547807d5aa1e562cba3963d78360db665aa9a8dbb49da773a"} Dec 12 16:30:01 crc kubenswrapper[5130]: I1212 16:30:01.220498 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" event={"ID":"ab30f5e0-5097-4413-bb3e-fe8ca350378f","Type":"ContainerStarted","Data":"766e3243f51588112a816c6f02f6d7f3501538eef1dfb32bee7ccca7116521e4"} Dec 12 16:30:01 crc kubenswrapper[5130]: I1212 16:30:01.238201 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" podStartSLOduration=1.238166844 podStartE2EDuration="1.238166844s" podCreationTimestamp="2025-12-12 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 16:30:01.23483959 +0000 UTC m=+901.132514422" watchObservedRunningTime="2025-12-12 16:30:01.238166844 +0000 UTC m=+901.135841676" Dec 12 16:30:02 crc kubenswrapper[5130]: I1212 16:30:02.229868 5130 generic.go:358] "Generic (PLEG): container finished" podID="ab30f5e0-5097-4413-bb3e-fe8ca350378f" containerID="453898f774a7d00547807d5aa1e562cba3963d78360db665aa9a8dbb49da773a" exitCode=0 Dec 12 16:30:02 crc kubenswrapper[5130]: I1212 16:30:02.229950 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" event={"ID":"ab30f5e0-5097-4413-bb3e-fe8ca350378f","Type":"ContainerDied","Data":"453898f774a7d00547807d5aa1e562cba3963d78360db665aa9a8dbb49da773a"} Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.477264 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.618316 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab30f5e0-5097-4413-bb3e-fe8ca350378f-secret-volume\") pod \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.618394 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmwxm\" (UniqueName: \"kubernetes.io/projected/ab30f5e0-5097-4413-bb3e-fe8ca350378f-kube-api-access-hmwxm\") pod \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.618495 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab30f5e0-5097-4413-bb3e-fe8ca350378f-config-volume\") pod \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\" (UID: \"ab30f5e0-5097-4413-bb3e-fe8ca350378f\") " Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.619506 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab30f5e0-5097-4413-bb3e-fe8ca350378f-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab30f5e0-5097-4413-bb3e-fe8ca350378f" (UID: "ab30f5e0-5097-4413-bb3e-fe8ca350378f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.625634 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab30f5e0-5097-4413-bb3e-fe8ca350378f-kube-api-access-hmwxm" (OuterVolumeSpecName: "kube-api-access-hmwxm") pod "ab30f5e0-5097-4413-bb3e-fe8ca350378f" (UID: "ab30f5e0-5097-4413-bb3e-fe8ca350378f"). 
InnerVolumeSpecName "kube-api-access-hmwxm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.625667 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab30f5e0-5097-4413-bb3e-fe8ca350378f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab30f5e0-5097-4413-bb3e-fe8ca350378f" (UID: "ab30f5e0-5097-4413-bb3e-fe8ca350378f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.720547 5130 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab30f5e0-5097-4413-bb3e-fe8ca350378f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.720898 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hmwxm\" (UniqueName: \"kubernetes.io/projected/ab30f5e0-5097-4413-bb3e-fe8ca350378f-kube-api-access-hmwxm\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:03 crc kubenswrapper[5130]: I1212 16:30:03.720969 5130 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab30f5e0-5097-4413-bb3e-fe8ca350378f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 12 16:30:04 crc kubenswrapper[5130]: I1212 16:30:04.245686 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" event={"ID":"ab30f5e0-5097-4413-bb3e-fe8ca350378f","Type":"ContainerDied","Data":"766e3243f51588112a816c6f02f6d7f3501538eef1dfb32bee7ccca7116521e4"} Dec 12 16:30:04 crc kubenswrapper[5130]: I1212 16:30:04.245736 5130 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766e3243f51588112a816c6f02f6d7f3501538eef1dfb32bee7ccca7116521e4" Dec 12 16:30:04 crc kubenswrapper[5130]: I1212 16:30:04.245759 5130 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29425950-g52jh" Dec 12 16:30:09 crc kubenswrapper[5130]: E1212 16:30:09.429831 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:30:09 crc kubenswrapper[5130]: E1212 16:30:09.430367 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4zc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cdpts_service-telemetry(eeed1a9b-f386-4d11-b730-03bcb44f9a55): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 
16:30:09 crc kubenswrapper[5130]: E1212 16:30:09.431674 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:30:23 crc kubenswrapper[5130]: E1212 16:30:23.370566 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:30:38 crc kubenswrapper[5130]: E1212 16:30:38.370830 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:30:53 crc kubenswrapper[5130]: E1212 16:30:53.437799 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:30:53 crc kubenswrapper[5130]: E1212 16:30:53.439020 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4zc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cdpts_service-telemetry(eeed1a9b-f386-4d11-b730-03bcb44f9a55): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 16:30:53 crc kubenswrapper[5130]: E1212 16:30:53.440490 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:31:04 crc kubenswrapper[5130]: E1212 16:31:04.371642 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:31:18 crc kubenswrapper[5130]: E1212 16:31:18.370946 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:31:29 crc kubenswrapper[5130]: E1212 16:31:29.396054 5130 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 12 16:31:31 
crc kubenswrapper[5130]: I1212 16:31:31.560153 5130 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 12 16:31:31 crc kubenswrapper[5130]: I1212 16:31:31.572843 5130 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 12 16:31:31 crc kubenswrapper[5130]: I1212 16:31:31.594300 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38244: no serving certificate available for the kubelet" Dec 12 16:31:31 crc kubenswrapper[5130]: I1212 16:31:31.622972 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38250: no serving certificate available for the kubelet" Dec 12 16:31:31 crc kubenswrapper[5130]: I1212 16:31:31.657476 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38262: no serving certificate available for the kubelet" Dec 12 16:31:31 crc kubenswrapper[5130]: I1212 16:31:31.709025 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38266: no serving certificate available for the kubelet" Dec 12 16:31:31 crc kubenswrapper[5130]: I1212 16:31:31.779374 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38270: no serving certificate available for the kubelet" Dec 12 16:31:31 crc kubenswrapper[5130]: I1212 16:31:31.898536 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38278: no serving certificate available for the kubelet" Dec 12 16:31:32 crc kubenswrapper[5130]: I1212 16:31:32.087405 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38288: no serving certificate available for the kubelet" Dec 12 16:31:32 crc kubenswrapper[5130]: I1212 16:31:32.433402 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38292: no serving certificate available for the kubelet" Dec 12 16:31:33 crc kubenswrapper[5130]: I1212 16:31:33.103481 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38304: no serving certificate available for the kubelet" 
Dec 12 16:31:33 crc kubenswrapper[5130]: I1212 16:31:33.370988 5130 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:31:33 crc kubenswrapper[5130]: E1212 16:31:33.371379 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:31:34 crc kubenswrapper[5130]: I1212 16:31:34.410701 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38308: no serving certificate available for the kubelet" Dec 12 16:31:36 crc kubenswrapper[5130]: I1212 16:31:36.994315 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38312: no serving certificate available for the kubelet" Dec 12 16:31:42 crc kubenswrapper[5130]: I1212 16:31:42.150774 5130 ???:1] "http: TLS handshake error from 192.168.126.11:56578: no serving certificate available for the kubelet" Dec 12 16:31:45 crc kubenswrapper[5130]: E1212 16:31:45.370529 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:31:52 crc kubenswrapper[5130]: I1212 16:31:52.424706 5130 ???:1] "http: TLS handshake error from 192.168.126.11:45290: no serving certificate available for the kubelet" Dec 12 16:32:00 crc kubenswrapper[5130]: E1212 16:32:00.377077 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:32:12 crc kubenswrapper[5130]: E1212 16:32:12.370660 5130 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:32:12 crc kubenswrapper[5130]: I1212 16:32:12.942354 5130 ???:1] "http: TLS handshake error from 192.168.126.11:47210: no serving certificate available for the kubelet" Dec 12 16:32:22 crc kubenswrapper[5130]: I1212 16:32:22.729896 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:32:22 crc kubenswrapper[5130]: I1212 16:32:22.730794 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:32:26 crc kubenswrapper[5130]: E1212 16:32:26.440668 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc 
= unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:32:26 crc kubenswrapper[5130]: E1212 16:32:26.441014 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4zc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cdpts_service-telemetry(eeed1a9b-f386-4d11-b730-03bcb44f9a55): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 16:32:26 crc kubenswrapper[5130]: E1212 16:32:26.442269 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:32:41 crc kubenswrapper[5130]: E1212 16:32:41.371923 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:32:52 crc kubenswrapper[5130]: I1212 16:32:52.730771 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:32:52 crc kubenswrapper[5130]: I1212 16:32:52.731796 5130 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:32:53 crc kubenswrapper[5130]: E1212 16:32:53.371004 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:32:53 crc kubenswrapper[5130]: I1212 16:32:53.936863 5130 ???:1] "http: TLS handshake error from 192.168.126.11:46394: no serving certificate available for the kubelet" Dec 12 16:33:07 crc kubenswrapper[5130]: E1212 16:33:07.371124 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:33:07 crc kubenswrapper[5130]: I1212 16:33:07.841663 5130 scope.go:117] "RemoveContainer" containerID="cd14218a0a5eccabd6feefa3a694ba2b3d5b3b29968a3b7cb7037d7bcbfcaab7" Dec 12 16:33:07 crc kubenswrapper[5130]: I1212 16:33:07.870124 5130 scope.go:117] "RemoveContainer" containerID="109a5417fc5b240d74e50c2027f7b1468b267070f3fedd1a18f0a7ccc33b88a4" Dec 12 16:33:07 crc kubenswrapper[5130]: I1212 16:33:07.893473 5130 scope.go:117] "RemoveContainer" containerID="a35dd526ca4d2cdf3307d75472e2757ffbb122ab329a0106eeceb830dfe67dcd" Dec 12 16:33:21 crc kubenswrapper[5130]: E1212 16:33:21.370873 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:33:22 crc kubenswrapper[5130]: I1212 16:33:22.730494 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:33:22 crc kubenswrapper[5130]: I1212 16:33:22.730609 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:33:22 crc kubenswrapper[5130]: I1212 16:33:22.730678 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:33:22 crc kubenswrapper[5130]: I1212 16:33:22.731652 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dbf5bb6f7e04eed65e9d6c35b6039c8cb076ec0ac681151d1925ab21dbb68a59"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:33:22 crc kubenswrapper[5130]: I1212 16:33:22.731743 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://dbf5bb6f7e04eed65e9d6c35b6039c8cb076ec0ac681151d1925ab21dbb68a59" gracePeriod=600 Dec 12 16:33:23 crc kubenswrapper[5130]: I1212 16:33:23.746570 5130 generic.go:358] "Generic (PLEG): container finished" 
podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="dbf5bb6f7e04eed65e9d6c35b6039c8cb076ec0ac681151d1925ab21dbb68a59" exitCode=0 Dec 12 16:33:23 crc kubenswrapper[5130]: I1212 16:33:23.746608 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"dbf5bb6f7e04eed65e9d6c35b6039c8cb076ec0ac681151d1925ab21dbb68a59"} Dec 12 16:33:23 crc kubenswrapper[5130]: I1212 16:33:23.747562 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"6762a0533ce6be3435ef031a367613e279a52623a855d77f3efd56da6bafa5a8"} Dec 12 16:33:23 crc kubenswrapper[5130]: I1212 16:33:23.747611 5130 scope.go:117] "RemoveContainer" containerID="3adb890ff85b18dd025cb02aa6704930a7f2cdc1bd92119b5fe1c8a455d2a99e" Dec 12 16:33:33 crc kubenswrapper[5130]: E1212 16:33:33.370803 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:33:44 crc 
kubenswrapper[5130]: E1212 16:33:44.370792 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:33:56 crc kubenswrapper[5130]: E1212 16:33:56.370584 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:34:11 crc kubenswrapper[5130]: 
E1212 16:34:11.370772 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:34:15 crc kubenswrapper[5130]: I1212 16:34:15.890281 5130 ???:1] "http: TLS handshake error from 192.168.126.11:55690: no serving certificate available for the kubelet" Dec 12 16:34:24 crc kubenswrapper[5130]: E1212 16:34:24.371083 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: 
manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.702536 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-6bs58"] Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.703263 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ab30f5e0-5097-4413-bb3e-fe8ca350378f" containerName="collect-profiles" Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.703278 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab30f5e0-5097-4413-bb3e-fe8ca350378f" containerName="collect-profiles" Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.703399 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="ab30f5e0-5097-4413-bb3e-fe8ca350378f" containerName="collect-profiles" Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.712559 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6bs58"] Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.712741 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-6bs58" Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.867902 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pzm\" (UniqueName: \"kubernetes.io/projected/6510d065-e486-4274-a8ca-4c2cdb8dd1ae-kube-api-access-q4pzm\") pod \"infrawatch-operators-6bs58\" (UID: \"6510d065-e486-4274-a8ca-4c2cdb8dd1ae\") " pod="service-telemetry/infrawatch-operators-6bs58" Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.969887 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pzm\" (UniqueName: \"kubernetes.io/projected/6510d065-e486-4274-a8ca-4c2cdb8dd1ae-kube-api-access-q4pzm\") pod \"infrawatch-operators-6bs58\" (UID: \"6510d065-e486-4274-a8ca-4c2cdb8dd1ae\") " pod="service-telemetry/infrawatch-operators-6bs58" Dec 12 16:34:25 crc kubenswrapper[5130]: I1212 16:34:25.995630 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pzm\" (UniqueName: \"kubernetes.io/projected/6510d065-e486-4274-a8ca-4c2cdb8dd1ae-kube-api-access-q4pzm\") pod \"infrawatch-operators-6bs58\" (UID: \"6510d065-e486-4274-a8ca-4c2cdb8dd1ae\") " pod="service-telemetry/infrawatch-operators-6bs58" Dec 12 16:34:26 crc kubenswrapper[5130]: I1212 16:34:26.040302 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-6bs58" Dec 12 16:34:26 crc kubenswrapper[5130]: I1212 16:34:26.278731 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6bs58"] Dec 12 16:34:26 crc kubenswrapper[5130]: E1212 16:34:26.359413 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:34:26 crc kubenswrapper[5130]: E1212 16:34:26.359623 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4pzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6bs58_service-telemetry(6510d065-e486-4274-a8ca-4c2cdb8dd1ae): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 
16:34:26 crc kubenswrapper[5130]: E1212 16:34:26.360806 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:34:27 crc kubenswrapper[5130]: I1212 16:34:27.303838 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6bs58" event={"ID":"6510d065-e486-4274-a8ca-4c2cdb8dd1ae","Type":"ContainerStarted","Data":"32f2e1d4d60ac82efd45ae71478a261aa3a0041bccea7ebd07ee5c3e2380871a"} Dec 12 16:34:27 crc kubenswrapper[5130]: E1212 16:34:27.305090 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:34:28 crc kubenswrapper[5130]: E1212 16:34:28.313201 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:34:38 crc kubenswrapper[5130]: E1212 16:34:38.371015 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:34:39 crc kubenswrapper[5130]: E1212 16:34:39.432566 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:34:39 crc kubenswrapper[5130]: E1212 16:34:39.432833 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4pzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6bs58_service-telemetry(6510d065-e486-4274-a8ca-4c2cdb8dd1ae): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 16:34:39 crc kubenswrapper[5130]: E1212 16:34:39.434080 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:34:51 crc kubenswrapper[5130]: E1212 16:34:51.370482 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:34:51 crc kubenswrapper[5130]: E1212 16:34:51.370565 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:35:00 crc kubenswrapper[5130]: I1212 16:35:00.837778 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log"
Dec 12 16:35:00 crc kubenswrapper[5130]: I1212 16:35:00.844482 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log"
Dec 12 16:35:00 crc kubenswrapper[5130]: I1212 16:35:00.851974 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:35:00 crc kubenswrapper[5130]: I1212 16:35:00.854863 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 12 16:35:03 crc kubenswrapper[5130]: I1212 16:35:03.376257 5130 patch_prober.go:28] interesting pod/oauth-openshift-6567f5ffdb-jrpfr container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded" start-of-body=
Dec 12 16:35:03 crc kubenswrapper[5130]: I1212 16:35:03.377267 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-6567f5ffdb-jrpfr" podUID="5b0a332f-52bd-409b-b5c0-f2723c617bed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded"
Dec 12 16:35:03 crc kubenswrapper[5130]: E1212 16:35:03.437603 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 16:35:03 crc kubenswrapper[5130]: E1212 16:35:03.438595 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4pzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6bs58_service-telemetry(6510d065-e486-4274-a8ca-4c2cdb8dd1ae): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 16:35:03 crc kubenswrapper[5130]: E1212 16:35:03.439832 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:35:04 crc kubenswrapper[5130]: E1212 16:35:04.370716 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:35:16 crc kubenswrapper[5130]: E1212 16:35:16.370448 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:35:17 crc kubenswrapper[5130]: E1212 16:35:17.431539 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 16:35:17 crc kubenswrapper[5130]: E1212 16:35:17.431914 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4zc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cdpts_service-telemetry(eeed1a9b-f386-4d11-b730-03bcb44f9a55): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 16:35:17 crc kubenswrapper[5130]: E1212 16:35:17.433301 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:35:28 crc kubenswrapper[5130]: E1212 16:35:28.370302 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:35:31 crc kubenswrapper[5130]: E1212 16:35:31.370751 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:35:39 crc kubenswrapper[5130]: E1212 16:35:39.370484 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:35:45 crc kubenswrapper[5130]: E1212 16:35:45.454498 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 12 16:35:45 crc kubenswrapper[5130]: E1212 16:35:45.455203 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4pzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6bs58_service-telemetry(6510d065-e486-4274-a8ca-4c2cdb8dd1ae): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 12 16:35:45 crc kubenswrapper[5130]: E1212 16:35:45.456490 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:35:49 crc kubenswrapper[5130]: I1212 16:35:49.621248 5130 ???:1] "http: TLS handshake error from 192.168.126.11:34766: no serving certificate available for the kubelet"
Dec 12 16:35:52 crc kubenswrapper[5130]: E1212 16:35:52.370359 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:35:52 crc kubenswrapper[5130]: I1212 16:35:52.730488 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:35:52 crc kubenswrapper[5130]: I1212 16:35:52.730630 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:35:57 crc kubenswrapper[5130]: E1212 16:35:57.370606 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:36:07 crc kubenswrapper[5130]: E1212 16:36:07.371146 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:36:11 crc kubenswrapper[5130]: E1212 16:36:11.370851 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:36:21 crc kubenswrapper[5130]: E1212 16:36:21.370318 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:36:22 crc kubenswrapper[5130]: I1212 16:36:22.730683 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:36:22 crc kubenswrapper[5130]: I1212 16:36:22.730856 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:36:25 crc kubenswrapper[5130]: E1212 16:36:25.371321 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:36:33 crc kubenswrapper[5130]: E1212 16:36:33.370894 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:36:36 crc kubenswrapper[5130]: I1212 16:36:36.370719 5130 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 12 16:36:36 crc kubenswrapper[5130]: E1212 16:36:36.371096 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:36:44 crc kubenswrapper[5130]: E1212 16:36:44.371576 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:36:51 crc kubenswrapper[5130]: E1212 16:36:51.370311 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:36:52 crc kubenswrapper[5130]: I1212 16:36:52.730660 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:36:52 crc kubenswrapper[5130]: I1212 16:36:52.730783 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:36:52 crc kubenswrapper[5130]: I1212 16:36:52.730853 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p"
Dec 12 16:36:52 crc kubenswrapper[5130]: I1212 16:36:52.731924 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6762a0533ce6be3435ef031a367613e279a52623a855d77f3efd56da6bafa5a8"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 16:36:52 crc kubenswrapper[5130]: I1212 16:36:52.731993 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://6762a0533ce6be3435ef031a367613e279a52623a855d77f3efd56da6bafa5a8" gracePeriod=600
Dec 12 16:36:53 crc kubenswrapper[5130]: I1212 16:36:53.418090 5130 generic.go:358] "Generic (PLEG): container finished" podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="6762a0533ce6be3435ef031a367613e279a52623a855d77f3efd56da6bafa5a8" exitCode=0
Dec 12 16:36:53 crc kubenswrapper[5130]: I1212 16:36:53.418186 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"6762a0533ce6be3435ef031a367613e279a52623a855d77f3efd56da6bafa5a8"}
Dec 12 16:36:53 crc kubenswrapper[5130]: I1212 16:36:53.418705 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"7fe9f788d3114cddc6804fb2d06b9d6fe79d4f751418b6cc896b61cce5f1c95d"}
Dec 12 16:36:53 crc kubenswrapper[5130]: I1212 16:36:53.418731 5130 scope.go:117] "RemoveContainer" containerID="dbf5bb6f7e04eed65e9d6c35b6039c8cb076ec0ac681151d1925ab21dbb68a59"
Dec 12 16:36:57 crc kubenswrapper[5130]: E1212 16:36:57.370750 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:36:59 crc kubenswrapper[5130]: I1212 16:36:59.770276 5130 ???:1] "http: TLS handshake error from 192.168.126.11:52280: no serving certificate available for the kubelet" Dec 12 16:37:02 crc kubenswrapper[5130]: E1212 16:37:02.370696 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:37:12 crc kubenswrapper[5130]: E1212 16:37:12.370709 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.694447 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h46w2"] Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.702780 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.761683 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h46w2"] Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.769691 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-utilities\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.769785 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8nh2\" (UniqueName: \"kubernetes.io/projected/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-kube-api-access-t8nh2\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.769854 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-catalog-content\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.871888 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-utilities\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.872617 5130 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t8nh2\" (UniqueName: \"kubernetes.io/projected/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-kube-api-access-t8nh2\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.872700 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-catalog-content\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.872689 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-utilities\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.873239 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-catalog-content\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:13 crc kubenswrapper[5130]: I1212 16:37:13.903793 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8nh2\" (UniqueName: \"kubernetes.io/projected/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-kube-api-access-t8nh2\") pod \"certified-operators-h46w2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:14 crc kubenswrapper[5130]: I1212 16:37:14.071441 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:14 crc kubenswrapper[5130]: I1212 16:37:14.277404 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h46w2"] Dec 12 16:37:14 crc kubenswrapper[5130]: I1212 16:37:14.586230 5130 generic.go:358] "Generic (PLEG): container finished" podID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerID="134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278" exitCode=0 Dec 12 16:37:14 crc kubenswrapper[5130]: I1212 16:37:14.586398 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h46w2" event={"ID":"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2","Type":"ContainerDied","Data":"134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278"} Dec 12 16:37:14 crc kubenswrapper[5130]: I1212 16:37:14.586476 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h46w2" event={"ID":"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2","Type":"ContainerStarted","Data":"d1d4d05b754478a44a46557114554ff8d33446539ed1f76bfa1f327c87d9adee"} Dec 12 16:37:15 crc kubenswrapper[5130]: I1212 16:37:15.600808 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h46w2" event={"ID":"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2","Type":"ContainerStarted","Data":"071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0"} Dec 12 16:37:16 crc kubenswrapper[5130]: I1212 16:37:16.609781 5130 generic.go:358] "Generic (PLEG): container finished" podID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerID="071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0" exitCode=0 Dec 12 16:37:16 crc kubenswrapper[5130]: I1212 16:37:16.609926 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h46w2" 
event={"ID":"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2","Type":"ContainerDied","Data":"071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0"} Dec 12 16:37:17 crc kubenswrapper[5130]: E1212 16:37:17.455610 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:37:17 crc kubenswrapper[5130]: E1212 16:37:17.466445 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4pzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6bs58_service-telemetry(6510d065-e486-4274-a8ca-4c2cdb8dd1ae): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 
16:37:17 crc kubenswrapper[5130]: E1212 16:37:17.467750 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:37:17 crc kubenswrapper[5130]: I1212 16:37:17.621484 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h46w2" event={"ID":"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2","Type":"ContainerStarted","Data":"e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937"} Dec 12 16:37:17 crc kubenswrapper[5130]: I1212 16:37:17.645227 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h46w2" podStartSLOduration=3.950865921 podStartE2EDuration="4.645202857s" podCreationTimestamp="2025-12-12 16:37:13 +0000 UTC" firstStartedPulling="2025-12-12 16:37:14.588393384 +0000 UTC m=+1334.486068256" lastFinishedPulling="2025-12-12 16:37:15.28273036 +0000 UTC m=+1335.180405192" observedRunningTime="2025-12-12 16:37:17.641640798 +0000 UTC m=+1337.539315640" watchObservedRunningTime="2025-12-12 16:37:17.645202857 +0000 UTC m=+1337.542877699" Dec 12 16:37:22 crc kubenswrapper[5130]: I1212 16:37:22.548675 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k5p4x"] Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 
16:37:23.105361 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5p4x"] Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.105546 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.121473 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/6e1befc6-b980-4afa-ab59-48293a764532-kube-api-access-4g74w\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.121539 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-utilities\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.121796 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-catalog-content\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.223877 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/6e1befc6-b980-4afa-ab59-48293a764532-kube-api-access-4g74w\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 
16:37:23.223972 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-utilities\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.224044 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-catalog-content\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.224934 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-catalog-content\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.224965 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-utilities\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.251649 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/6e1befc6-b980-4afa-ab59-48293a764532-kube-api-access-4g74w\") pod \"redhat-operators-k5p4x\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") " pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.428622 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k5p4x" Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.698127 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5p4x"] Dec 12 16:37:23 crc kubenswrapper[5130]: I1212 16:37:23.713724 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5p4x" event={"ID":"6e1befc6-b980-4afa-ab59-48293a764532","Type":"ContainerStarted","Data":"970df140b69f5a985f67d38cf7877d73e620bd664c49cbc41174b5a595b00dcf"} Dec 12 16:37:24 crc kubenswrapper[5130]: I1212 16:37:24.072477 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:24 crc kubenswrapper[5130]: I1212 16:37:24.073259 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:24 crc kubenswrapper[5130]: I1212 16:37:24.117379 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:24 crc kubenswrapper[5130]: I1212 16:37:24.726159 5130 generic.go:358] "Generic (PLEG): container finished" podID="6e1befc6-b980-4afa-ab59-48293a764532" containerID="99d1de38a68106a981893034823bcd09217f72693d98325f4b9026ff485b6962" exitCode=0 Dec 12 16:37:24 crc kubenswrapper[5130]: I1212 16:37:24.726317 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5p4x" event={"ID":"6e1befc6-b980-4afa-ab59-48293a764532","Type":"ContainerDied","Data":"99d1de38a68106a981893034823bcd09217f72693d98325f4b9026ff485b6962"} Dec 12 16:37:24 crc kubenswrapper[5130]: I1212 16:37:24.774290 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:25 crc kubenswrapper[5130]: I1212 16:37:25.737113 5130 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5p4x" event={"ID":"6e1befc6-b980-4afa-ab59-48293a764532","Type":"ContainerStarted","Data":"76a5062737bc9e902bd064bf949c75f800c4824e926de3e6a5160d08acf53329"} Dec 12 16:37:26 crc kubenswrapper[5130]: E1212 16:37:26.372267 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:37:26 crc kubenswrapper[5130]: I1212 16:37:26.467645 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h46w2"] Dec 12 16:37:26 crc kubenswrapper[5130]: I1212 16:37:26.746505 5130 generic.go:358] "Generic (PLEG): container finished" podID="6e1befc6-b980-4afa-ab59-48293a764532" containerID="76a5062737bc9e902bd064bf949c75f800c4824e926de3e6a5160d08acf53329" exitCode=0 Dec 12 16:37:26 crc kubenswrapper[5130]: I1212 16:37:26.746592 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5p4x" event={"ID":"6e1befc6-b980-4afa-ab59-48293a764532","Type":"ContainerDied","Data":"76a5062737bc9e902bd064bf949c75f800c4824e926de3e6a5160d08acf53329"} Dec 12 16:37:26 crc 
kubenswrapper[5130]: I1212 16:37:26.747064 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h46w2" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="registry-server" containerID="cri-o://e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937" gracePeriod=2 Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.214788 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h46w2" Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.288104 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-utilities\") pod \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.288258 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-catalog-content\") pod \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.289874 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-utilities" (OuterVolumeSpecName: "utilities") pod "29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" (UID: "29b869ed-f7d2-4b6d-851f-b5e4d95c08c2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.289972 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8nh2\" (UniqueName: \"kubernetes.io/projected/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-kube-api-access-t8nh2\") pod \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\" (UID: \"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2\") " Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.290597 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-utilities\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.310410 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-kube-api-access-t8nh2" (OuterVolumeSpecName: "kube-api-access-t8nh2") pod "29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" (UID: "29b869ed-f7d2-4b6d-851f-b5e4d95c08c2"). InnerVolumeSpecName "kube-api-access-t8nh2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.319466 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" (UID: "29b869ed-f7d2-4b6d-851f-b5e4d95c08c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.391685 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.391722 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t8nh2\" (UniqueName: \"kubernetes.io/projected/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2-kube-api-access-t8nh2\") on node \"crc\" DevicePath \"\"" Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.757633 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5p4x" event={"ID":"6e1befc6-b980-4afa-ab59-48293a764532","Type":"ContainerStarted","Data":"5654bb14f4d5a9034b4ccbc02f64eec58c7445b9d260baa1d8fb26fe1841dc19"} Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.761667 5130 generic.go:358] "Generic (PLEG): container finished" podID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerID="e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937" exitCode=0 Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.761811 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h46w2"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.762104 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h46w2" event={"ID":"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2","Type":"ContainerDied","Data":"e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937"}
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.762259 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h46w2" event={"ID":"29b869ed-f7d2-4b6d-851f-b5e4d95c08c2","Type":"ContainerDied","Data":"d1d4d05b754478a44a46557114554ff8d33446539ed1f76bfa1f327c87d9adee"}
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.762362 5130 scope.go:117] "RemoveContainer" containerID="e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.790038 5130 scope.go:117] "RemoveContainer" containerID="071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.796283 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k5p4x" podStartSLOduration=5.250396112 podStartE2EDuration="5.796252797s" podCreationTimestamp="2025-12-12 16:37:22 +0000 UTC" firstStartedPulling="2025-12-12 16:37:24.73173683 +0000 UTC m=+1344.629411702" lastFinishedPulling="2025-12-12 16:37:25.277593555 +0000 UTC m=+1345.175268387" observedRunningTime="2025-12-12 16:37:27.785034465 +0000 UTC m=+1347.682709347" watchObservedRunningTime="2025-12-12 16:37:27.796252797 +0000 UTC m=+1347.693927619"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.808567 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h46w2"]
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.814733 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h46w2"]
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.815867 5130 scope.go:117] "RemoveContainer" containerID="134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.843173 5130 scope.go:117] "RemoveContainer" containerID="e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937"
Dec 12 16:37:27 crc kubenswrapper[5130]: E1212 16:37:27.843965 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937\": container with ID starting with e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937 not found: ID does not exist" containerID="e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.844119 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937"} err="failed to get container status \"e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937\": rpc error: code = NotFound desc = could not find container \"e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937\": container with ID starting with e1adbb04117031fed41b11ac7bbd94a1ea1eb00f366debd53bf05624f33cc937 not found: ID does not exist"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.844222 5130 scope.go:117] "RemoveContainer" containerID="071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0"
Dec 12 16:37:27 crc kubenswrapper[5130]: E1212 16:37:27.844760 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0\": container with ID starting with 071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0 not found: ID does not exist" containerID="071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.844882 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0"} err="failed to get container status \"071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0\": rpc error: code = NotFound desc = could not find container \"071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0\": container with ID starting with 071c98b37e40c4098a2b1e0a17932a663089012e7367b9cd40540fd0466530c0 not found: ID does not exist"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.844987 5130 scope.go:117] "RemoveContainer" containerID="134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278"
Dec 12 16:37:27 crc kubenswrapper[5130]: E1212 16:37:27.845494 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278\": container with ID starting with 134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278 not found: ID does not exist" containerID="134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278"
Dec 12 16:37:27 crc kubenswrapper[5130]: I1212 16:37:27.845545 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278"} err="failed to get container status \"134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278\": rpc error: code = NotFound desc = could not find container \"134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278\": container with ID starting with 134a93eda017aa592e7d8690789a96a081c982e9aece7ec215571f87aeaba278 not found: ID does not exist"
Dec 12 16:37:28 crc kubenswrapper[5130]: E1212 16:37:28.372462 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:37:28 crc kubenswrapper[5130]: I1212 16:37:28.386630 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" path="/var/lib/kubelet/pods/29b869ed-f7d2-4b6d-851f-b5e4d95c08c2/volumes"
Dec 12 16:37:33 crc kubenswrapper[5130]: I1212 16:37:33.429924 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-k5p4x"
Dec 12 16:37:33 crc kubenswrapper[5130]: I1212 16:37:33.430812 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k5p4x"
Dec 12 16:37:33 crc kubenswrapper[5130]: I1212 16:37:33.480261 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k5p4x"
Dec 12 16:37:33 crc kubenswrapper[5130]: I1212 16:37:33.858993 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k5p4x"
Dec 12 16:37:33 crc kubenswrapper[5130]: I1212 16:37:33.966356 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5p4x"]
Dec 12 16:37:35 crc kubenswrapper[5130]: I1212 16:37:35.823481 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k5p4x" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="registry-server" containerID="cri-o://5654bb14f4d5a9034b4ccbc02f64eec58c7445b9d260baa1d8fb26fe1841dc19" gracePeriod=2
Dec 12 16:37:37 crc kubenswrapper[5130]: E1212 16:37:37.372059 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:37:39 crc kubenswrapper[5130]: E1212 16:37:39.370740 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:37:39 crc kubenswrapper[5130]: I1212 16:37:39.855416 5130 generic.go:358] "Generic (PLEG): container finished" podID="6e1befc6-b980-4afa-ab59-48293a764532" containerID="5654bb14f4d5a9034b4ccbc02f64eec58c7445b9d260baa1d8fb26fe1841dc19" exitCode=0
Dec 12 16:37:39 crc kubenswrapper[5130]: I1212 16:37:39.855512 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5p4x" event={"ID":"6e1befc6-b980-4afa-ab59-48293a764532","Type":"ContainerDied","Data":"5654bb14f4d5a9034b4ccbc02f64eec58c7445b9d260baa1d8fb26fe1841dc19"}
Dec 12 16:37:39 crc kubenswrapper[5130]: I1212 16:37:39.920731 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5p4x"
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.021550 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-utilities\") pod \"6e1befc6-b980-4afa-ab59-48293a764532\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") "
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.021744 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-catalog-content\") pod \"6e1befc6-b980-4afa-ab59-48293a764532\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") "
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.021789 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/6e1befc6-b980-4afa-ab59-48293a764532-kube-api-access-4g74w\") pod \"6e1befc6-b980-4afa-ab59-48293a764532\" (UID: \"6e1befc6-b980-4afa-ab59-48293a764532\") "
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.022880 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-utilities" (OuterVolumeSpecName: "utilities") pod "6e1befc6-b980-4afa-ab59-48293a764532" (UID: "6e1befc6-b980-4afa-ab59-48293a764532"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.027866 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1befc6-b980-4afa-ab59-48293a764532-kube-api-access-4g74w" (OuterVolumeSpecName: "kube-api-access-4g74w") pod "6e1befc6-b980-4afa-ab59-48293a764532" (UID: "6e1befc6-b980-4afa-ab59-48293a764532"). InnerVolumeSpecName "kube-api-access-4g74w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.112536 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e1befc6-b980-4afa-ab59-48293a764532" (UID: "6e1befc6-b980-4afa-ab59-48293a764532"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.123470 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.123794 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/6e1befc6-b980-4afa-ab59-48293a764532-kube-api-access-4g74w\") on node \"crc\" DevicePath \"\""
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.123926 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e1befc6-b980-4afa-ab59-48293a764532-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.863139 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5p4x" event={"ID":"6e1befc6-b980-4afa-ab59-48293a764532","Type":"ContainerDied","Data":"970df140b69f5a985f67d38cf7877d73e620bd664c49cbc41174b5a595b00dcf"}
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.863222 5130 scope.go:117] "RemoveContainer" containerID="5654bb14f4d5a9034b4ccbc02f64eec58c7445b9d260baa1d8fb26fe1841dc19"
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.863228 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5p4x"
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.885416 5130 scope.go:117] "RemoveContainer" containerID="76a5062737bc9e902bd064bf949c75f800c4824e926de3e6a5160d08acf53329"
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.885723 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5p4x"]
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.891657 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k5p4x"]
Dec 12 16:37:40 crc kubenswrapper[5130]: I1212 16:37:40.912317 5130 scope.go:117] "RemoveContainer" containerID="99d1de38a68106a981893034823bcd09217f72693d98325f4b9026ff485b6962"
Dec 12 16:37:42 crc kubenswrapper[5130]: I1212 16:37:42.379369 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1befc6-b980-4afa-ab59-48293a764532" path="/var/lib/kubelet/pods/6e1befc6-b980-4afa-ab59-48293a764532/volumes"
Dec 12 16:37:48 crc kubenswrapper[5130]: E1212 16:37:48.369991 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:37:54 crc kubenswrapper[5130]: E1212 16:37:54.370978 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:38:00 crc kubenswrapper[5130]: E1212 16:38:00.376505 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:38:06 crc kubenswrapper[5130]: E1212 16:38:06.370331 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.917663 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4sccg"]
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.921919 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="extract-content"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.921963 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="extract-content"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.921976 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="registry-server"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.921982 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="registry-server"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922008 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="extract-utilities"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922015 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="extract-utilities"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922026 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="extract-utilities"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922032 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="extract-utilities"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922043 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="registry-server"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922050 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="registry-server"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922068 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="extract-content"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922074 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="extract-content"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922203 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e1befc6-b980-4afa-ab59-48293a764532" containerName="registry-server"
Dec 12 16:38:09 crc kubenswrapper[5130]: I1212 16:38:09.922215 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="29b869ed-f7d2-4b6d-851f-b5e4d95c08c2" containerName="registry-server"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.421429 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.430757 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4sccg"]
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.522654 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-catalog-content\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.522819 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-utilities\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.522898 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-kube-api-access-rsl5l\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.624432 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-catalog-content\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.624842 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-utilities\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.624897 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-kube-api-access-rsl5l\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.625288 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-catalog-content\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.625321 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-utilities\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.649784 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-kube-api-access-rsl5l\") pod \"community-operators-4sccg\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") " pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.741482 5130 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:10 crc kubenswrapper[5130]: I1212 16:38:10.966766 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4sccg"]
Dec 12 16:38:11 crc kubenswrapper[5130]: I1212 16:38:11.084452 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4sccg" event={"ID":"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de","Type":"ContainerStarted","Data":"c4596c3be96f08edc7a76fc72bd094fbf2673d1da01776e82e1e95eb549bf0ce"}
Dec 12 16:38:12 crc kubenswrapper[5130]: I1212 16:38:12.093625 5130 generic.go:358] "Generic (PLEG): container finished" podID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerID="91467f0c589d4df111bfbed47d692f1901c56447a0273f9365584e9fe0cd0a7f" exitCode=0
Dec 12 16:38:12 crc kubenswrapper[5130]: I1212 16:38:12.093710 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4sccg" event={"ID":"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de","Type":"ContainerDied","Data":"91467f0c589d4df111bfbed47d692f1901c56447a0273f9365584e9fe0cd0a7f"}
Dec 12 16:38:14 crc kubenswrapper[5130]: I1212 16:38:14.108429 5130 generic.go:358] "Generic (PLEG): container finished" podID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerID="07c772f238fbf95175412956c009858e6df6daf8b8dcc3d3f80a51adfc767ada" exitCode=0
Dec 12 16:38:14 crc kubenswrapper[5130]: I1212 16:38:14.108509 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4sccg" event={"ID":"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de","Type":"ContainerDied","Data":"07c772f238fbf95175412956c009858e6df6daf8b8dcc3d3f80a51adfc767ada"}
Dec 12 16:38:14 crc kubenswrapper[5130]: E1212 16:38:14.370822 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:38:15 crc kubenswrapper[5130]: I1212 16:38:15.122295 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4sccg" event={"ID":"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de","Type":"ContainerStarted","Data":"c9b5ad76f3816d65ea22d74f25d51aff2d92e8651e58be8c05c0fad7fdf1f1cc"}
Dec 12 16:38:15 crc kubenswrapper[5130]: I1212 16:38:15.147775 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4sccg" podStartSLOduration=5.213177336 podStartE2EDuration="6.147745248s" podCreationTimestamp="2025-12-12 16:38:09 +0000 UTC" firstStartedPulling="2025-12-12 16:38:12.094838802 +0000 UTC m=+1391.992513634" lastFinishedPulling="2025-12-12 16:38:13.029406714 +0000 UTC m=+1392.927081546" observedRunningTime="2025-12-12 16:38:15.140371372 +0000 UTC m=+1395.038046224" watchObservedRunningTime="2025-12-12 16:38:15.147745248 +0000 UTC m=+1395.045420080"
Dec 12 16:38:18 crc kubenswrapper[5130]: E1212 16:38:18.371043 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:38:20 crc kubenswrapper[5130]: I1212 16:38:20.742741 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:20 crc kubenswrapper[5130]: I1212 16:38:20.742816 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:20 crc kubenswrapper[5130]: I1212 16:38:20.780375 5130 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:21 crc kubenswrapper[5130]: I1212 16:38:21.222703 5130 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:21 crc kubenswrapper[5130]: I1212 16:38:21.281038 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4sccg"]
Dec 12 16:38:23 crc kubenswrapper[5130]: I1212 16:38:23.190981 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4sccg" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="registry-server" containerID="cri-o://c9b5ad76f3816d65ea22d74f25d51aff2d92e8651e58be8c05c0fad7fdf1f1cc" gracePeriod=2
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.228768 5130 generic.go:358] "Generic (PLEG): container finished" podID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerID="c9b5ad76f3816d65ea22d74f25d51aff2d92e8651e58be8c05c0fad7fdf1f1cc" exitCode=0
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.228985 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4sccg" event={"ID":"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de","Type":"ContainerDied","Data":"c9b5ad76f3816d65ea22d74f25d51aff2d92e8651e58be8c05c0fad7fdf1f1cc"}
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.346900 5130 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4sccg"
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.493411 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-utilities\") pod \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") "
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.493657 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-kube-api-access-rsl5l\") pod \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") "
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.493742 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-catalog-content\") pod \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\" (UID: \"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de\") "
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.494707 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-utilities" (OuterVolumeSpecName: "utilities") pod "4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" (UID: "4dcc42e3-b653-498b-8ca0-bcbf16d0b1de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.502392 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-kube-api-access-rsl5l" (OuterVolumeSpecName: "kube-api-access-rsl5l") pod "4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" (UID: "4dcc42e3-b653-498b-8ca0-bcbf16d0b1de"). InnerVolumeSpecName "kube-api-access-rsl5l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.543235 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" (UID: "4dcc42e3-b653-498b-8ca0-bcbf16d0b1de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.595847 5130 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.595887 5130 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-utilities\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:27 crc kubenswrapper[5130]: I1212 16:38:27.595902 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsl5l\" (UniqueName: \"kubernetes.io/projected/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de-kube-api-access-rsl5l\") on node \"crc\" DevicePath \"\""
Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.240137 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4sccg" event={"ID":"4dcc42e3-b653-498b-8ca0-bcbf16d0b1de","Type":"ContainerDied","Data":"c4596c3be96f08edc7a76fc72bd094fbf2673d1da01776e82e1e95eb549bf0ce"}
Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.240243 5130 scope.go:117] "RemoveContainer" containerID="c9b5ad76f3816d65ea22d74f25d51aff2d92e8651e58be8c05c0fad7fdf1f1cc"
Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.240274 5130 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-4sccg" Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.271148 5130 scope.go:117] "RemoveContainer" containerID="07c772f238fbf95175412956c009858e6df6daf8b8dcc3d3f80a51adfc767ada" Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.295799 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4sccg"] Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.303744 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4sccg"] Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.314319 5130 scope.go:117] "RemoveContainer" containerID="91467f0c589d4df111bfbed47d692f1901c56447a0273f9365584e9fe0cd0a7f" Dec 12 16:38:28 crc kubenswrapper[5130]: I1212 16:38:28.381977 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" path="/var/lib/kubelet/pods/4dcc42e3-b653-498b-8ca0-bcbf16d0b1de/volumes" Dec 12 16:38:29 crc kubenswrapper[5130]: E1212 16:38:29.370878 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" 
podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:38:33 crc kubenswrapper[5130]: E1212 16:38:33.370219 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:38:44 crc kubenswrapper[5130]: E1212 16:38:44.371045 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" 
podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:38:46 crc kubenswrapper[5130]: E1212 16:38:46.371070 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:38:55 crc kubenswrapper[5130]: E1212 16:38:55.371346 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" 
podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:38:59 crc kubenswrapper[5130]: E1212 16:38:59.370992 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:39:09 crc kubenswrapper[5130]: E1212 16:39:09.371150 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" 
podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:39:13 crc kubenswrapper[5130]: E1212 16:39:13.370986 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:39:22 crc kubenswrapper[5130]: E1212 16:39:22.369991 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" 
podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:39:22 crc kubenswrapper[5130]: I1212 16:39:22.730730 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:39:22 crc kubenswrapper[5130]: I1212 16:39:22.730897 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:39:27 crc kubenswrapper[5130]: E1212 16:39:27.370797 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:39:35 crc kubenswrapper[5130]: E1212 16:39:35.370537 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off 
pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:39:40 crc kubenswrapper[5130]: E1212 16:39:40.375560 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:39:50 crc kubenswrapper[5130]: E1212 16:39:50.376405 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:39:51 crc kubenswrapper[5130]: E1212 16:39:51.371932 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:39:52 crc kubenswrapper[5130]: I1212 16:39:52.729788 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:39:52 crc kubenswrapper[5130]: I1212 16:39:52.729864 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:40:00 crc kubenswrapper[5130]: I1212 16:40:00.931512 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log" Dec 12 16:40:00 crc kubenswrapper[5130]: I1212 16:40:00.935870 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rzhgf_6625166c-6688-498a-81c5-89ec476edef2/kube-multus/0.log" Dec 12 16:40:00 crc kubenswrapper[5130]: I1212 16:40:00.945141 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:40:00 crc kubenswrapper[5130]: I1212 16:40:00.946770 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 12 16:40:01 crc kubenswrapper[5130]: E1212 16:40:01.370769 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:40:02 crc kubenswrapper[5130]: E1212 16:40:02.455492 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:40:02 crc kubenswrapper[5130]: E1212 16:40:02.456404 5130 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4pzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-6bs58_service-telemetry(6510d065-e486-4274-a8ca-4c2cdb8dd1ae): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 16:40:02 crc kubenswrapper[5130]: E1212 16:40:02.457705 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:40:13 crc kubenswrapper[5130]: E1212 16:40:13.370420 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:40:14 crc kubenswrapper[5130]: E1212 16:40:14.371525 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:40:22 crc kubenswrapper[5130]: I1212 16:40:22.730640 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:40:22 crc kubenswrapper[5130]: I1212 16:40:22.731723 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:40:22 crc kubenswrapper[5130]: 
I1212 16:40:22.731814 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" Dec 12 16:40:22 crc kubenswrapper[5130]: I1212 16:40:22.732726 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fe9f788d3114cddc6804fb2d06b9d6fe79d4f751418b6cc896b61cce5f1c95d"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 12 16:40:22 crc kubenswrapper[5130]: I1212 16:40:22.732839 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://7fe9f788d3114cddc6804fb2d06b9d6fe79d4f751418b6cc896b61cce5f1c95d" gracePeriod=600 Dec 12 16:40:23 crc kubenswrapper[5130]: I1212 16:40:23.128754 5130 generic.go:358] "Generic (PLEG): container finished" podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="7fe9f788d3114cddc6804fb2d06b9d6fe79d4f751418b6cc896b61cce5f1c95d" exitCode=0 Dec 12 16:40:23 crc kubenswrapper[5130]: I1212 16:40:23.128871 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"7fe9f788d3114cddc6804fb2d06b9d6fe79d4f751418b6cc896b61cce5f1c95d"} Dec 12 16:40:23 crc kubenswrapper[5130]: I1212 16:40:23.129483 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerStarted","Data":"9de92a04945e023ccc31a98009262f502bc39e27b128d908dccf392e6673f836"} Dec 12 16:40:23 crc kubenswrapper[5130]: I1212 16:40:23.129518 5130 
scope.go:117] "RemoveContainer" containerID="6762a0533ce6be3435ef031a367613e279a52623a855d77f3efd56da6bafa5a8" Dec 12 16:40:27 crc kubenswrapper[5130]: E1212 16:40:27.370489 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:40:29 crc kubenswrapper[5130]: E1212 16:40:29.462926 5130 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 12 16:40:29 crc kubenswrapper[5130]: E1212 16:40:29.464086 5130 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4zc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-cdpts_service-telemetry(eeed1a9b-f386-4d11-b730-03bcb44f9a55): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 12 16:40:29 crc kubenswrapper[5130]: E1212 16:40:29.465424 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.238243 5130 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2sjxj/must-gather-v4h5l"] Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.240082 5130 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="registry-server" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.240102 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="registry-server" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.240121 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="extract-content" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.240129 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="extract-content" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.240156 5130 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="extract-utilities" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.240164 5130 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="extract-utilities" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.240332 5130 memory_manager.go:356] "RemoveStaleState removing state" podUID="4dcc42e3-b653-498b-8ca0-bcbf16d0b1de" containerName="registry-server" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.255118 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2sjxj/must-gather-v4h5l"] Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.255328 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.258113 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2sjxj\"/\"openshift-service-ca.crt\"" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.259086 5130 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2sjxj\"/\"kube-root-ca.crt\"" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.276555 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxp5\" (UniqueName: \"kubernetes.io/projected/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-kube-api-access-5zxp5\") pod \"must-gather-v4h5l\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.276701 5130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-must-gather-output\") pod \"must-gather-v4h5l\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.378504 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-must-gather-output\") pod \"must-gather-v4h5l\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.378587 5130 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5zxp5\" (UniqueName: \"kubernetes.io/projected/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-kube-api-access-5zxp5\") pod 
\"must-gather-v4h5l\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.379349 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-must-gather-output\") pod \"must-gather-v4h5l\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.410147 5130 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zxp5\" (UniqueName: \"kubernetes.io/projected/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-kube-api-access-5zxp5\") pod \"must-gather-v4h5l\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:38 crc kubenswrapper[5130]: I1212 16:40:38.582903 5130 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:40:39 crc kubenswrapper[5130]: I1212 16:40:39.035833 5130 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2sjxj/must-gather-v4h5l"] Dec 12 16:40:39 crc kubenswrapper[5130]: I1212 16:40:39.255083 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" event={"ID":"e591e9a3-fc25-45d7-bb1b-84d59c92c39d","Type":"ContainerStarted","Data":"e61ff18407c4da69daabe52c3f069448d5b6b9470c5e8e023f00d37e559a8d95"} Dec 12 16:40:39 crc kubenswrapper[5130]: E1212 16:40:39.370265 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:40:43 crc kubenswrapper[5130]: E1212 16:40:43.370588 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:40:46 crc kubenswrapper[5130]: I1212 16:40:46.307292 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" event={"ID":"e591e9a3-fc25-45d7-bb1b-84d59c92c39d","Type":"ContainerStarted","Data":"de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332"} Dec 12 16:40:46 crc kubenswrapper[5130]: I1212 16:40:46.307714 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" event={"ID":"e591e9a3-fc25-45d7-bb1b-84d59c92c39d","Type":"ContainerStarted","Data":"4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11"} Dec 12 16:40:46 crc kubenswrapper[5130]: I1212 16:40:46.324816 5130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" podStartSLOduration=2.134226574 podStartE2EDuration="8.324798834s" podCreationTimestamp="2025-12-12 16:40:38 +0000 UTC" firstStartedPulling="2025-12-12 16:40:39.038544164 +0000 UTC m=+1538.936218996" lastFinishedPulling="2025-12-12 16:40:45.229116434 +0000 UTC m=+1545.126791256" observedRunningTime="2025-12-12 16:40:46.322152418 +0000 UTC m=+1546.219827250" watchObservedRunningTime="2025-12-12 16:40:46.324798834 +0000 UTC m=+1546.222473666" Dec 12 16:40:48 crc kubenswrapper[5130]: I1212 16:40:48.228800 5130 ???:1] "http: TLS handshake error from 192.168.126.11:33314: no serving certificate available 
for the kubelet" Dec 12 16:40:52 crc kubenswrapper[5130]: E1212 16:40:52.370730 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:40:54 crc kubenswrapper[5130]: E1212 16:40:54.369904 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 
12 16:41:05 crc kubenswrapper[5130]: E1212 16:41:05.370459 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:41:06 crc kubenswrapper[5130]: E1212 16:41:06.380207 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:41:18 crc 
kubenswrapper[5130]: E1212 16:41:18.370573 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:41:18 crc kubenswrapper[5130]: E1212 16:41:18.370664 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:41:25 crc kubenswrapper[5130]: 
I1212 16:41:25.071229 5130 ???:1] "http: TLS handshake error from 192.168.126.11:58076: no serving certificate available for the kubelet" Dec 12 16:41:25 crc kubenswrapper[5130]: I1212 16:41:25.184214 5130 ???:1] "http: TLS handshake error from 192.168.126.11:58092: no serving certificate available for the kubelet" Dec 12 16:41:25 crc kubenswrapper[5130]: I1212 16:41:25.189368 5130 ???:1] "http: TLS handshake error from 192.168.126.11:58094: no serving certificate available for the kubelet" Dec 12 16:41:32 crc kubenswrapper[5130]: E1212 16:41:32.371103 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:41:33 crc kubenswrapper[5130]: E1212 16:41:33.370879 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:41:37 crc kubenswrapper[5130]: I1212 16:41:37.046777 5130 ???:1] "http: TLS handshake error from 192.168.126.11:46584: no serving certificate available for the kubelet" Dec 12 16:41:37 crc kubenswrapper[5130]: I1212 16:41:37.236737 5130 ???:1] "http: TLS handshake error from 192.168.126.11:46586: no serving certificate available for the kubelet" Dec 12 16:41:37 crc kubenswrapper[5130]: I1212 16:41:37.249372 5130 ???:1] "http: TLS handshake error from 192.168.126.11:46602: no serving certificate available for the kubelet" Dec 12 16:41:43 crc kubenswrapper[5130]: I1212 16:41:43.413457 5130 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 12 16:41:43 crc kubenswrapper[5130]: E1212 16:41:43.413937 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image 
source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:41:46 crc kubenswrapper[5130]: E1212 16:41:46.383368 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:41:52 crc kubenswrapper[5130]: I1212 16:41:52.902745 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38342: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.105220 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38358: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.123721 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38374: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.144564 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38376: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: 
I1212 16:41:53.356564 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38384: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.363141 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38400: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.364471 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38412: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.564035 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38414: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.702634 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38426: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.703116 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38438: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.728066 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38448: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.904384 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38450: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.929795 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38466: no serving certificate available for the kubelet" Dec 12 16:41:53 crc kubenswrapper[5130]: I1212 16:41:53.957420 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38482: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.093928 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38498: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.276895 5130 
???:1] "http: TLS handshake error from 192.168.126.11:38508: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.277651 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38510: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.309511 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38518: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.475509 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38524: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.478878 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38538: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.509499 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38550: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.690126 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38564: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.868528 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38580: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.915868 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38582: no serving certificate available for the kubelet" Dec 12 16:41:54 crc kubenswrapper[5130]: I1212 16:41:54.919455 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38588: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.098440 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38602: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.118046 5130 ???:1] "http: TLS handshake 
error from 192.168.126.11:38608: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.126274 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38622: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.280321 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38626: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: E1212 16:41:55.369872 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.454140 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38642: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.475968 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38648: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.513434 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38662: no serving certificate available for the kubelet" Dec 12 16:41:55 crc 
kubenswrapper[5130]: I1212 16:41:55.653817 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38664: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.660284 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38668: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.662979 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38678: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.713334 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38682: no serving certificate available for the kubelet" Dec 12 16:41:55 crc kubenswrapper[5130]: I1212 16:41:55.857096 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38692: no serving certificate available for the kubelet" Dec 12 16:41:56 crc kubenswrapper[5130]: I1212 16:41:56.013072 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38706: no serving certificate available for the kubelet" Dec 12 16:41:56 crc kubenswrapper[5130]: I1212 16:41:56.031401 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38714: no serving certificate available for the kubelet" Dec 12 16:41:56 crc kubenswrapper[5130]: I1212 16:41:56.031745 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38730: no serving certificate available for the kubelet" Dec 12 16:41:56 crc kubenswrapper[5130]: I1212 16:41:56.180797 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38738: no serving certificate available for the kubelet" Dec 12 16:41:56 crc kubenswrapper[5130]: I1212 16:41:56.183862 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38740: no serving certificate available for the kubelet" Dec 12 16:41:56 crc kubenswrapper[5130]: I1212 16:41:56.186111 5130 ???:1] "http: TLS handshake error from 192.168.126.11:38744: no serving certificate available for the kubelet" Dec 12 16:41:59 crc kubenswrapper[5130]: E1212 
16:41:59.370521 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:42:06 crc kubenswrapper[5130]: E1212 16:42:06.371163 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:42:08 crc kubenswrapper[5130]: I1212 16:42:08.343830 5130 
???:1] "http: TLS handshake error from 192.168.126.11:50644: no serving certificate available for the kubelet" Dec 12 16:42:08 crc kubenswrapper[5130]: I1212 16:42:08.520545 5130 ???:1] "http: TLS handshake error from 192.168.126.11:50658: no serving certificate available for the kubelet" Dec 12 16:42:08 crc kubenswrapper[5130]: I1212 16:42:08.562828 5130 ???:1] "http: TLS handshake error from 192.168.126.11:50662: no serving certificate available for the kubelet" Dec 12 16:42:08 crc kubenswrapper[5130]: I1212 16:42:08.707600 5130 ???:1] "http: TLS handshake error from 192.168.126.11:50674: no serving certificate available for the kubelet" Dec 12 16:42:08 crc kubenswrapper[5130]: I1212 16:42:08.776620 5130 ???:1] "http: TLS handshake error from 192.168.126.11:50682: no serving certificate available for the kubelet" Dec 12 16:42:10 crc kubenswrapper[5130]: E1212 16:42:10.381896 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:42:17 crc kubenswrapper[5130]: E1212 16:42:17.370277 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:42:24 crc kubenswrapper[5130]: E1212 16:42:24.370234 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:42:27 crc kubenswrapper[5130]: I1212 16:42:27.482869 5130 ???:1] "http: TLS handshake error from 192.168.126.11:47346: no serving certificate available for the kubelet" Dec 12 16:42:31 crc 
kubenswrapper[5130]: E1212 16:42:31.370712 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:42:37 crc kubenswrapper[5130]: E1212 16:42:37.371552 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:42:43 crc kubenswrapper[5130]: 
E1212 16:42:43.370604 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:42:48 crc kubenswrapper[5130]: E1212 16:42:48.371558 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:42:49 crc kubenswrapper[5130]: I1212 16:42:49.192157 
5130 generic.go:358] "Generic (PLEG): container finished" podID="e591e9a3-fc25-45d7-bb1b-84d59c92c39d" containerID="4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11" exitCode=0 Dec 12 16:42:49 crc kubenswrapper[5130]: I1212 16:42:49.192274 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" event={"ID":"e591e9a3-fc25-45d7-bb1b-84d59c92c39d","Type":"ContainerDied","Data":"4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11"} Dec 12 16:42:49 crc kubenswrapper[5130]: I1212 16:42:49.193478 5130 scope.go:117] "RemoveContainer" containerID="4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11" Dec 12 16:42:52 crc kubenswrapper[5130]: I1212 16:42:52.729925 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:42:52 crc kubenswrapper[5130]: I1212 16:42:52.730655 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:42:57 crc kubenswrapper[5130]: E1212 16:42:57.369694 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.247267 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49228: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.395883 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49240: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.410173 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49256: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.433156 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49260: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.446875 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49266: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.463106 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49274: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.474314 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49286: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.486032 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49294: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.496669 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49308: no 
serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.692003 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49320: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.705741 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49322: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.732994 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49336: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.745929 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49344: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.761411 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49354: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.774661 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49370: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.788597 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49382: no serving certificate available for the kubelet" Dec 12 16:42:58 crc kubenswrapper[5130]: I1212 16:42:58.800856 5130 ???:1] "http: TLS handshake error from 192.168.126.11:49398: no serving certificate available for the kubelet" Dec 12 16:43:00 crc kubenswrapper[5130]: E1212 16:43:00.382305 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:43:03 crc kubenswrapper[5130]: I1212 16:43:03.845652 5130 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2sjxj/must-gather-v4h5l"] Dec 12 16:43:03 crc kubenswrapper[5130]: I1212 16:43:03.847810 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" podUID="e591e9a3-fc25-45d7-bb1b-84d59c92c39d" containerName="copy" containerID="cri-o://de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332" gracePeriod=2 Dec 12 16:43:03 crc kubenswrapper[5130]: I1212 16:43:03.849032 5130 status_manager.go:895] "Failed to get status for pod" podUID="e591e9a3-fc25-45d7-bb1b-84d59c92c39d" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" err="pods \"must-gather-v4h5l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-2sjxj\": no relationship found between node 'crc' and this object" Dec 12 16:43:03 crc kubenswrapper[5130]: I1212 16:43:03.850713 5130 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2sjxj/must-gather-v4h5l"] Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.241898 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2sjxj_must-gather-v4h5l_e591e9a3-fc25-45d7-bb1b-84d59c92c39d/copy/0.log" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.242728 5130 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.244270 5130 status_manager.go:895] "Failed to get status for pod" podUID="e591e9a3-fc25-45d7-bb1b-84d59c92c39d" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" err="pods \"must-gather-v4h5l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-2sjxj\": no relationship found between node 'crc' and this object" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.326987 5130 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2sjxj_must-gather-v4h5l_e591e9a3-fc25-45d7-bb1b-84d59c92c39d/copy/0.log" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.327454 5130 generic.go:358] "Generic (PLEG): container finished" podID="e591e9a3-fc25-45d7-bb1b-84d59c92c39d" containerID="de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332" exitCode=143 Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.327537 5130 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.327641 5130 scope.go:117] "RemoveContainer" containerID="de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.328990 5130 status_manager.go:895] "Failed to get status for pod" podUID="e591e9a3-fc25-45d7-bb1b-84d59c92c39d" pod="openshift-must-gather-2sjxj/must-gather-v4h5l" err="pods \"must-gather-v4h5l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-2sjxj\": no relationship found between node 'crc' and this object" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.354564 5130 scope.go:117] "RemoveContainer" containerID="4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.374203 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-must-gather-output\") pod \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.374384 5130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zxp5\" (UniqueName: \"kubernetes.io/projected/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-kube-api-access-5zxp5\") pod \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\" (UID: \"e591e9a3-fc25-45d7-bb1b-84d59c92c39d\") " Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.384706 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-kube-api-access-5zxp5" (OuterVolumeSpecName: "kube-api-access-5zxp5") pod "e591e9a3-fc25-45d7-bb1b-84d59c92c39d" (UID: "e591e9a3-fc25-45d7-bb1b-84d59c92c39d"). 
InnerVolumeSpecName "kube-api-access-5zxp5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.417352 5130 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e591e9a3-fc25-45d7-bb1b-84d59c92c39d" (UID: "e591e9a3-fc25-45d7-bb1b-84d59c92c39d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.437899 5130 scope.go:117] "RemoveContainer" containerID="de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332" Dec 12 16:43:04 crc kubenswrapper[5130]: E1212 16:43:04.438561 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332\": container with ID starting with de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332 not found: ID does not exist" containerID="de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.438639 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332"} err="failed to get container status \"de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332\": rpc error: code = NotFound desc = could not find container \"de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332\": container with ID starting with de01f12591f8c0dc318bd567f0eec2fd47472a1f25a19da700d038d7d3f58332 not found: ID does not exist" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.438674 5130 scope.go:117] "RemoveContainer" containerID="4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11" Dec 12 16:43:04 crc 
kubenswrapper[5130]: E1212 16:43:04.439016 5130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11\": container with ID starting with 4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11 not found: ID does not exist" containerID="4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.439063 5130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11"} err="failed to get container status \"4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11\": rpc error: code = NotFound desc = could not find container \"4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11\": container with ID starting with 4e2ec79febc3619a5028891e3c0d265f40d9d07fb2d6b6ef7efefef5e8d86e11 not found: ID does not exist" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.479409 5130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zxp5\" (UniqueName: \"kubernetes.io/projected/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-kube-api-access-5zxp5\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:04 crc kubenswrapper[5130]: I1212 16:43:04.479453 5130 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e591e9a3-fc25-45d7-bb1b-84d59c92c39d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 12 16:43:06 crc kubenswrapper[5130]: I1212 16:43:06.376486 5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e591e9a3-fc25-45d7-bb1b-84d59c92c39d" path="/var/lib/kubelet/pods/e591e9a3-fc25-45d7-bb1b-84d59c92c39d/volumes" Dec 12 16:43:10 crc kubenswrapper[5130]: E1212 16:43:10.375338 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:43:14 crc kubenswrapper[5130]: E1212 16:43:14.370268 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:43:22 crc kubenswrapper[5130]: I1212 16:43:22.731066 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 12 16:43:22 crc kubenswrapper[5130]: I1212 16:43:22.732854 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 12 16:43:23 crc kubenswrapper[5130]: E1212 16:43:23.370456 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:43:28 crc kubenswrapper[5130]: E1212 16:43:28.371802 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing 
source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55" Dec 12 16:43:36 crc kubenswrapper[5130]: E1212 16:43:36.370680 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae" Dec 12 16:43:42 crc kubenswrapper[5130]: E1212 16:43:42.371117 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:43:48 crc kubenswrapper[5130]: E1212 16:43:48.370527 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"
Dec 12 16:43:52 crc kubenswrapper[5130]: I1212 16:43:52.730773 5130 patch_prober.go:28] interesting pod/machine-config-daemon-qwg8p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 12 16:43:52 crc kubenswrapper[5130]: I1212 16:43:52.731362 5130 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 12 16:43:52 crc kubenswrapper[5130]: I1212 16:43:52.731415 5130 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p"
Dec 12 16:43:52 crc kubenswrapper[5130]: I1212 16:43:52.732118 5130 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9de92a04945e023ccc31a98009262f502bc39e27b128d908dccf392e6673f836"} pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 12 16:43:52 crc kubenswrapper[5130]: I1212 16:43:52.732173 5130 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerName="machine-config-daemon" containerID="cri-o://9de92a04945e023ccc31a98009262f502bc39e27b128d908dccf392e6673f836" gracePeriod=600
Dec 12 16:43:52 crc kubenswrapper[5130]: E1212 16:43:52.863701 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwg8p_openshift-machine-config-operator(5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e"
Dec 12 16:43:53 crc kubenswrapper[5130]: I1212 16:43:53.711443 5130 generic.go:358] "Generic (PLEG): container finished" podID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e" containerID="9de92a04945e023ccc31a98009262f502bc39e27b128d908dccf392e6673f836" exitCode=0
Dec 12 16:43:53 crc kubenswrapper[5130]: I1212 16:43:53.711793 5130 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" event={"ID":"5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e","Type":"ContainerDied","Data":"9de92a04945e023ccc31a98009262f502bc39e27b128d908dccf392e6673f836"}
Dec 12 16:43:53 crc kubenswrapper[5130]: I1212 16:43:53.711832 5130 scope.go:117] "RemoveContainer" containerID="7fe9f788d3114cddc6804fb2d06b9d6fe79d4f751418b6cc896b61cce5f1c95d"
Dec 12 16:43:53 crc kubenswrapper[5130]: I1212 16:43:53.712408 5130 scope.go:117] "RemoveContainer" containerID="9de92a04945e023ccc31a98009262f502bc39e27b128d908dccf392e6673f836"
Dec 12 16:43:53 crc kubenswrapper[5130]: E1212 16:43:53.712666 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qwg8p_openshift-machine-config-operator(5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e)\"" pod="openshift-machine-config-operator/machine-config-daemon-qwg8p" podUID="5eed03e3-b46f-4ae0-a063-d9a0d64c3a7e"
Dec 12 16:43:54 crc kubenswrapper[5130]: E1212 16:43:54.370637 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-cdpts" podUID="eeed1a9b-f386-4d11-b730-03bcb44f9a55"
Dec 12 16:44:00 crc kubenswrapper[5130]: E1212 16:44:00.375368 5130 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-6bs58" podUID="6510d065-e486-4274-a8ca-4c2cdb8dd1ae"