Jan 26 00:09:28 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 00:09:29 crc kubenswrapper[5121]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:29 crc kubenswrapper[5121]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 26 00:09:29 crc kubenswrapper[5121]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:29 crc kubenswrapper[5121]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:29 crc kubenswrapper[5121]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 26 00:09:29 crc kubenswrapper[5121]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 26 00:09:29 crc kubenswrapper[5121]: I0126 00:09:29.795726 5121 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028127 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028183 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028189 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028195 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028203 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028210 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028216 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028220 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028223 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028228 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028233 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028237 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028241 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028245 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028250 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028255 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028259 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028263 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028267 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028272 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028277 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028281 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028288 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028292 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028296 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028301 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028306 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028310 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028313 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028316 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028320 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028323 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028326 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028329 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028332 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028336 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028339 5121 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028342 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028346 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028349 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028352 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028355 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028359 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028362 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028365 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028371 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028377 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028380 5121 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028384 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028389 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028392 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028396 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028400 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028404 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028407 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028413 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028417 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028420 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028425 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028430 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028434 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028438 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028441 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028445 5121 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028451 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028456 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028460 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028464 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028467 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028471 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028475 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028479 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028482 5121 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028486 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028490 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028494 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028498 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028502 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028506 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028510 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028513 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028517 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028520 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028524 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028527 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.028530 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029201 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029218 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029223 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029228 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029232 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029236 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029239 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029243 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029247 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029251 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029254 5121 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029257 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029260 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029265 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029268 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029271 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029275 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029278 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029282 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029286 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029289 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029293 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029296 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029300 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029304 5121 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029308 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029311 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029315 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029318 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029321 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029325 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029328 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029331 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029335 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029363 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029367 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029371 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029375 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029378 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029384 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029389 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029393 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029396 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029400 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029404 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029407 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029410 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029414 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029417 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029421 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029424 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029427 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029431 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029434 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029438 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029443 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029448 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029452 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029455 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029459 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029462 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029466 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029469 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029472 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029478 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029481 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029485 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029488 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029492 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029495 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029499 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029502 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029505 5121 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029509 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029512 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029516 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029519 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029523 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029526 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029529 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029532 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029536 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029539 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029542 5121 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029546 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.029550 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029825 5121 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029850 5121 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029869 5121 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029875 5121 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029881 5121 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029886 5121 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029892 5121 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029898 5121 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029903 5121 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029907 5121 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029912 5121 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029916 5121 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029920 5121 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029924 5121 flags.go:64] FLAG: --cgroup-root=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029930 5121 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029934 5121 flags.go:64] FLAG: --client-ca-file=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029938 5121 flags.go:64] FLAG: --cloud-config=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029941 5121 flags.go:64] FLAG: --cloud-provider=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029945 5121 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029951 5121 flags.go:64] FLAG: --cluster-domain=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029954 5121 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029959 5121 flags.go:64] FLAG: --config-dir=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029962 5121 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029967 5121 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029973 5121 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029977 5121 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029982 5121 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029986 5121 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029990 5121 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029994 5121 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.029998 5121 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030002 5121 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030005 5121 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030014 5121 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030022 5121 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030026 5121 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030030 5121 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030034 5121 flags.go:64] FLAG: --enable-server="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030038 5121 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030044 5121 flags.go:64] FLAG: --event-burst="100"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030048 5121 flags.go:64] FLAG: --event-qps="50"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030051 5121 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030056 5121 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030060 5121 flags.go:64] FLAG: --eviction-hard=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030065 5121 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030069 5121 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030073 5121 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030077 5121 flags.go:64] FLAG: --eviction-soft=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030081 5121 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030085 5121 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030089 5121 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030092 5121 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030096 5121 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030100 5121 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030104 5121 flags.go:64] FLAG: --feature-gates=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030110 5121 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030114 5121 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030118 5121 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030122 5121 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030126 5121 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030130 5121 flags.go:64] FLAG: --help="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030134 5121 flags.go:64] FLAG: --hostname-override=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030137 5121 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030141 5121 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030146 5121 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030152 5121 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030158 5121 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030162 5121 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030166 5121 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030170 5121 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030174 5121 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030178 5121 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030182 5121 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030186 5121 flags.go:64] FLAG: --kube-reserved=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030190 5121 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030194 5121 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030198 5121 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030202 5121 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030205 5121 flags.go:64] FLAG: --lock-file=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030209 5121 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030214 5121 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030218 5121 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030225 5121 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030229 5121 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030233 5121 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030237 5121 flags.go:64] FLAG: --logging-format="text"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030241 5121 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030245 5121 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030249 5121 flags.go:64] FLAG: --manifest-url=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030252 5121 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030259 5121 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030263 5121 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030268 5121 flags.go:64] FLAG: --max-pods="110"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030273 5121 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030276 5121 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030280 5121 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030284 5121 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030291 5121 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030297 5121 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030301 5121 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030313 5121 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030317 5121 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030321 5121 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030325 5121 flags.go:64] FLAG: --pod-cidr=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030329 5121 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030338 5121 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030341 5121 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030345 5121 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030349 5121 flags.go:64] FLAG: --port="10250"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030353 5121 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030357 5121 flags.go:64] FLAG: --provider-id=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030361 5121 flags.go:64] FLAG: --qos-reserved=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030364 5121 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030369 5121 flags.go:64] FLAG: --register-node="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030373 5121 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030377 5121 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030385 5121 flags.go:64] FLAG: --registry-burst="10"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030389 5121 flags.go:64] FLAG: --registry-qps="5"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030393 5121 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030397 5121 flags.go:64] FLAG: --reserved-memory=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030402 5121 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030405 5121 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030409 5121 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030413 5121 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030425 5121 flags.go:64] FLAG: --runonce="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030430 5121 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030433 5121 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030437 5121 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030441 5121 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030445 5121 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030454 5121 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030458 5121 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030463 5121 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030466 5121 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030470 5121 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030474 5121 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030478 5121 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030482 5121 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030486 5121 flags.go:64] FLAG: --system-cgroups=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030490 5121 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030496 5121 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030499 5121 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030537 5121 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030543 5121 flags.go:64] FLAG: --tls-min-version=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030547 5121 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030550 5121 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030555 5121 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030559 5121 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030563 5121 flags.go:64] FLAG: --v="2"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030569 5121 flags.go:64] FLAG: --version="false"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030575 5121 flags.go:64] FLAG: --vmodule=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030582 5121 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.030586 5121 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030677 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030682 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030687 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030691 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030694 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030698 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030701 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030705 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030709 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030717 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030720 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030724 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030727 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030731 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030734 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030738 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030741 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030745 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030748 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030752 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030767 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030770 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030774 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030778 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030781 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030786 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030791 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030795 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030799 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030804 5121 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030808 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030812 5121 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030816 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030820 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030823 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030826 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030830 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030833 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030837 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030840 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030844 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030852 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030855 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030859 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030862 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030865 5121 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030869 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030872 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030875 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030879 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030882 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030885 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030889 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030892 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030895 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030899 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030902 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030905 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030908 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030913 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030917 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030920 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030923 5121 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030926 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030930 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030933 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030937 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030940 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030943 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030948 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030952 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030956 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030959 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030968 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030971 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030975 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030979 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030983 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030986 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030990 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.030998 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.031004 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.031008 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.031012 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.031016 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.031020 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.031258 5121 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.046137 5121 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.046186 5121 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046269 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046277 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046281 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046285 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046289 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046293 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046296 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046299 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046302 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046306 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046321 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046326 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046330 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046334 5121 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046340 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046346 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046351 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046355 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046360 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046364 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046368 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046372 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046375 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046379 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046382 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046386 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046391 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046394 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046400 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046405 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046409 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046413 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046417 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046421 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046425 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046428 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046432 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046435 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046439 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046443 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046446 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046450 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046453 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046464 5121 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046468 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046472 5121 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046475 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046480 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046484 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046487 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046491 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046532 5121 feature_gate.go:328] unrecognized feature gate:
DNSNameResolver Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046536 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046540 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046543 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046548 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046551 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046555 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046559 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046562 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046565 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046569 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046572 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046577 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046583 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046587 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046591 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046595 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046598 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046602 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046605 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046609 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046614 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046617 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046621 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046624 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046635 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 
00:09:30.046638 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046642 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046645 5121 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046649 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046652 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046657 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046663 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046668 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046671 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.046678 5121 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046894 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046916 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046920 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046924 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046927 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046931 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046935 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046938 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046942 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046946 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046951 5121 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046955 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046959 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC 
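[Editor's note] Every W/feature_gate.go:328 entry above is the kubelet skipping a gate name it does not recognize; only the gates in the I/feature_gate.go:384 summary line are actually applied. The OpenShift cluster-level gates (GatewayAPI, MachineConfigNodes, and the rest) travel to the kubelet in the same map but are not kubelet gates, so they are warned about and ignored. A minimal, self-contained Go sketch of that parse-warn-apply loop follows; it is an illustration only, not the k8s.io/component-base/featuregate code, and both gate lists are trimmed stand-ins taken from the log:

    package main

    import "fmt"

    func main() {
        // Gates this kubelet build recognizes, with defaults (a subset of the
        // "feature gates:" summary line above).
        known := map[string]bool{
            "ImageVolume":           true,
            "KMSv1":                 false,
            "UserNamespacesSupport": true,
        }
        // Desired state handed to the kubelet; cluster-level gates such as
        // GatewayAPI arrive in the same map but mean nothing to the kubelet.
        desired := map[string]bool{
            "KMSv1":      true, // recognized but deprecated: applied, with a warning upstream
            "GatewayAPI": true, // unknown to the kubelet: warned about, then skipped
        }
        for name, enabled := range desired {
            if _, ok := known[name]; !ok {
                fmt.Printf("W ... feature_gate.go:328] unrecognized feature gate: %s\n", name)
                continue
            }
            known[name] = enabled
        }
        fmt.Printf("I ... feature_gate.go:384] feature gates: %v\n", known)
    }

The three identical "feature gates:" summaries in this log come from the same desired state being parsed once per consumer, which is also why the full run of warnings repeats.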
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046963 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046967 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046972 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046976 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046981 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046985 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046989 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046993 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.046997 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047010 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047014 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047018 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047022 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047027 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047031 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047036 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047039 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047042 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047046 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047049 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047053 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047056 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047059 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047063 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047066 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047070 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047073 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047076 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047079 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047083 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047086 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047090 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047094 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047101 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047106 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047109 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047113 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047117 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047120 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047123 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047127 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047130 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047140 5121 feature_gate.go:328] unrecognized feature gate: Example
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047144 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047147 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047150 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047154 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047158 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047161 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047165 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047169 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047173 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047176 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047180 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047183 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047186 5121 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047190 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047193 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047197 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047200 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047205 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047209 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047214 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047218 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047222 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047226 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047230 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047234 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047238 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047242 5121 feature_gate.go:328] unrecognized feature gate: Example2
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047246 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047250 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.047254 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.047260 5121 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.047856 5121 server.go:962] "Client rotation is on, will bootstrap in background"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.050639 5121 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.054066 5121 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.054300 5121 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.055101 5121 server.go:1019] "Starting client certificate rotation"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.055337 5121 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.055472 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.063009 5121 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.064947 5121 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.065734 5121 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.100833 5121 log.go:25] "Validated CRI v1 runtime API"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.125435 5121 log.go:25] "Validated CRI v1 image API"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.127484 5121 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.129630 5121 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-26-00-03-01-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.129657 5121 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.153201 5121 manager.go:217] Machine: {Timestamp:2026-01-26 00:09:30.151859384 +0000 UTC m=+1.311060559 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:30670804-6c22-4489-85ce-db46ce0b0480 BootID:9e67991f-e7b2-4959-86b5-516338602be4 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:aa:8d:22 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:aa:8d:22 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f5:74:52 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b4:3f:55 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:74:e6:d5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cc:50:0d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:0f:4a:f1:e5:8f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:de:d8:18:8a:6b Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.153467 5121 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
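[Editor's note] The E/bootstrap.go:266 entry above is the root of the certificate churn that follows: the bootstrap client certificate embedded in /var/lib/kubelet/kubeconfig expired on 2025-12-03, so the kubelet falls back to bootstrap credentials and immediately tries to rotate, and the CSR POST then fails only because api-int.crc.testing:6443 is not reachable yet. A small Go sketch for inspecting such a PEM's expiry, assuming only that the path (taken from the certificate_store.go:147 entry) is a readable PEM bundle, and using nothing beyond the Go standard library:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        for {
            var block *pem.Block
            block, data = pem.Decode(data)
            if block == nil {
                break // no more PEM blocks
            }
            if block.Type != "CERTIFICATE" {
                continue // skip the private-key block in the pair
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("subject=%q notAfter=%s expired=%v\n",
                cert.Subject.CommonName, cert.NotAfter, time.Now().After(cert.NotAfter))
        }
    }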
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.153629 5121 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.154822 5121 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.154871 5121 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.155134 5121 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.155151 5121 container_manager_linux.go:306] "Creating device plugin manager"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.155184 5121 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.155500 5121 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.155965 5121 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.156208 5121 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.156893 5121 kubelet.go:491] "Attempting to sync node with API server"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.156918 5121 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.156948 5121 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.157011 5121 kubelet.go:397] "Adding apiserver pod source"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.157044 5121 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.158846 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.159005 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.159375 5121 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.159402 5121 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.161547 5121 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.161571 5121 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.166776 5121 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.166983 5121 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167350 5121 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167736 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167802 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167815 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167825 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167832 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167858 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167869 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167876 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167884 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167895 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.167905 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.168065 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.168270 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.168289 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.169544 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.183415 5121 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.183491 5121 server.go:1295] "Started kubelet"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.183653 5121 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.183723 5121 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.183879 5121 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.184366 5121 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 00:09:30 crc systemd[1]: Started Kubernetes Kubelet.
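[Editor's note] The nodeConfig entry above reserves 200m CPU plus 350Mi of memory for system daemons (SystemReserved) and sets a 100Mi memory.available hard-eviction threshold, and the Machine entry reports a MemoryCapacity of 33649926144 bytes. The documented node-allocatable formula (allocatable = capacity - kube-reserved - system-reserved - hard-eviction, with KubeReserved null here and so contributing nothing) gives what this node should report as allocatable memory. A worked Go sketch of that arithmetic:

    package main

    import "fmt"

    func main() {
        const mi = int64(1024 * 1024)
        capacity := int64(33649926144) // MemoryCapacity (bytes) from the Machine entry
        systemReserved := 350 * mi     // SystemReserved "memory":"350Mi" from nodeConfig
        evictionHard := 100 * mi       // memory.available hard-eviction threshold "100Mi"
        allocatable := capacity - systemReserved - evictionHard
        fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
            allocatable, float64(allocatable)/float64(1024*1024*1024))
        // Prints: allocatable memory: 33178066944 bytes (~30.90 GiB)
    }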
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.187530 5121 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.187944 5121 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.189283 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.191858 5121 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.191889 5121 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.191898 5121 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.191919 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.192444 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.192831 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e1f58e045d8dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.183440604 +0000 UTC m=+1.342641729,LastTimestamp:2026-01-26 00:09:30.183440604 +0000 UTC m=+1.342641729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.193190 5121 server.go:317] "Adding debug handlers to kubelet server"
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.201256 5121 factory.go:55] Registering systemd factory
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.201340 5121 factory.go:223] Registration of the systemd container factory successfully
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.201833 5121 factory.go:153] Registering CRI-O factory
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.201934 5121 factory.go:223] Registration of the crio container factory successfully
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.202048 5121 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.202128 5121 factory.go:103] Registering Raw factory
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.202206 5121 manager.go:1196] Started watching for new ooms in manager
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.203071 5121 manager.go:319] Starting recovery of all containers
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.230454 5121 manager.go:324] Recovery completed
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242330 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242443 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242461 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242474 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242486 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242507 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242525 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242540 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242558 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242575 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242590 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242603 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242618 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242632 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242652 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242670 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242687 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242703 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242716 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242730 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242745 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242780 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242796 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242808 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242820 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242835 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242853 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242867 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242888 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242901 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242915 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242940 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242957 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242972 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242985 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.242998 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243014 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243407 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243430 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243459 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243476 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243495 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243643 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243662 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243686 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243701 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243724 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243782 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243958 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.243979 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244000 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244024 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244040 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244060 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244077 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244100 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244136 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244249 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244275 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244296 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244318 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244333 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244351 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244376 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244470 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244491 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244508 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244529 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244546 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244564 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244785 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244818 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244842 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.244859 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.245611 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.245890 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.245907 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.245938 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.245957 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.245975 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.245990 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246010 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246024 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246041 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246066 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246112 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246141 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246160 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246176 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246200 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246222 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246254 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246272 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246292 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246311 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246333 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246358 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246377 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246423 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246442 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246460 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246480 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246501 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246527 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246545 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246569 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246587 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246616 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246634 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246650 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246672 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246688 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246814 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246838 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246858 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246880 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246897 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246918 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246933 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246951 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246973 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.246990 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247012 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247032 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247048 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247068 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247086 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247112 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247130 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247157 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247175 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247191 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247213 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247230 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247257 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247273 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247296 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247368 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247386 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247410 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247426 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247460 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247479 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" 
volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247503 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247518 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247534 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247552 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247568 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247591 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247607 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247623 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247644 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247658 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247681 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" 
volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247696 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247722 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247738 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247772 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247796 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247813 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247834 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247852 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247875 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247890 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247905 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" 
volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247928 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247945 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247966 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247982 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.247999 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248025 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248050 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248072 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248090 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248112 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248127 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248143 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248163 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248179 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248201 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248218 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248241 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248257 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248274 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248292 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248311 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248333 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" 
volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248353 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248372 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248388 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248405 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248426 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248442 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248465 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248482 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248500 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.248520 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.250910 5121 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.251229 5121 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.254538 5121 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.254596 5121 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.254626 5121 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.254633 5121 kubelet.go:2451] "Starting kubelet main sync loop" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.254675 5121 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.254560 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.254918 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255001 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255064 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255137 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255199 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255254 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 
00:09:30.255314 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255378 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255443 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255500 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255562 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255621 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255680 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255735 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255812 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255881 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.255942 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.256001 5121 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.256054 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.256610 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.256210 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.256688 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.256892 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.256936 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257206 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257232 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257254 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257374 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257394 5121 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257409 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257426 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257439 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257455 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257465 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257478 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257490 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257500 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257512 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257524 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257536 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257546 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257560 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257570 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257582 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257596 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257607 5121 reconstruct.go:97] "Volume reconstruction finished" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.257614 5121 reconciler.go:26] "Reconciler: start to sync state" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.264032 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.266011 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.266047 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.266058 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.266686 5121 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.266711 5121 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.266735 5121 state_mem.go:36] "Initialized new in-memory state store" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.271258 5121 policy_none.go:49] "None policy: Start" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.271305 5121 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.271323 5121 state_mem.go:35] "Initializing new in-memory state store" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.292278 5121 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.336434 5121 manager.go:341] "Starting Device Plugin manager" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.336517 5121 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.336537 5121 server.go:85] "Starting device plugin registration server" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.337276 5121 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.337307 5121 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.337494 5121 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.337669 5121 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.337693 5121 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.343910 5121 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.343997 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.355100 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.355418 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.356603 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.356651 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.356666 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.357488 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.357697 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.357773 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358010 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358042 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358055 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358479 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358535 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358552 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358859 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.358977 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.359045 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.359707 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.359737 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.359750 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.359831 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.359861 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.359874 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.360615 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.360796 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.360830 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.361228 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.361259 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.361270 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.361318 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.361340 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.361364 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.362087 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.362188 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.362228 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.393559 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.440600 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.448825 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.448862 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.448873 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.448890 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.448925 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.448936 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.449471 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.449530 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.449536 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.449568 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.449543 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.449657 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.450049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.450086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.450121 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.450287 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.451969 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.457063 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.466166 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.466194 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.466215 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.466232 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.466248 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.466287 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.466331 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.480531 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486060 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486120 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486148 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486202 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486226 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486255 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486284 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486335 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486361 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486384 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486415 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486489 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486512 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486514 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc 
kubenswrapper[5121]: I0126 00:09:30.486533 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486554 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.486749 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.487587 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.487602 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.487684 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.488096 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.488122 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.501375 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.508407 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.588883 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589102 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589142 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589214 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589278 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589312 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589316 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589370 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589397 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589423 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589439 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589480 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589447 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589512 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589529 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589488 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589538 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589562 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589563 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589588 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589568 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589628 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589654 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589663 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589699 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589709 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589736 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589740 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589879 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589881 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.589947 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.651436 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.652672 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.652711 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.652723 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.652768 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.653371 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.757851 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.782060 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-2fba7b56090cf55da408a8cc1427cdada306b5ec2b808be67238f6df9d389897 WatchSource:0}: Error finding container 2fba7b56090cf55da408a8cc1427cdada306b5ec2b808be67238f6df9d389897: Status 404 returned error can't find the container with id 2fba7b56090cf55da408a8cc1427cdada306b5ec2b808be67238f6df9d389897 Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.786847 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.786960 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.787121 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.788244 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: E0126 00:09:30.794467 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms" Jan 26 00:09:30 crc kubenswrapper[5121]: I0126 00:09:30.809836 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.810433 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-f07285fbb2067c615bfd6e078bcb2e6f0f14d78d7ebb53c7cd82cf1951d09c99 WatchSource:0}: Error finding container f07285fbb2067c615bfd6e078bcb2e6f0f14d78d7ebb53c7cd82cf1951d09c99: Status 404 returned error can't find the container with id f07285fbb2067c615bfd6e078bcb2e6f0f14d78d7ebb53c7cd82cf1951d09c99 Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.815558 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-6fccd6cf564b81188d07e524bd626bec7acfc706a5beaba1ca477b68fbc70374 WatchSource:0}: Error finding container 6fccd6cf564b81188d07e524bd626bec7acfc706a5beaba1ca477b68fbc70374: Status 404 returned error can't find the container with id 6fccd6cf564b81188d07e524bd626bec7acfc706a5beaba1ca477b68fbc70374 Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.816386 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-904626803ff1b3040d64219396e5b1bd5475d49bc62c6c40e07f7f87d4518b5a WatchSource:0}: Error finding container 904626803ff1b3040d64219396e5b1bd5475d49bc62c6c40e07f7f87d4518b5a: Status 404 returned error can't find the container with id 904626803ff1b3040d64219396e5b1bd5475d49bc62c6c40e07f7f87d4518b5a Jan 26 00:09:30 crc kubenswrapper[5121]: W0126 00:09:30.828935 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-1ec292539e9e2625c9d8154470476ef6e2007fe35b1f16d381b99d14c0446ff6 WatchSource:0}: Error finding container 1ec292539e9e2625c9d8154470476ef6e2007fe35b1f16d381b99d14c0446ff6: Status 404 returned error can't find the container with id 1ec292539e9e2625c9d8154470476ef6e2007fe35b1f16d381b99d14c0446ff6 Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.053924 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.056053 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.056107 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.056118 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.056151 5121 kubelet_node_status.go:78] "Attempting to register 
node" node="crc" Jan 26 00:09:31 crc kubenswrapper[5121]: E0126 00:09:31.056617 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 26 00:09:31 crc kubenswrapper[5121]: E0126 00:09:31.083585 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.170948 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.259959 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1ec292539e9e2625c9d8154470476ef6e2007fe35b1f16d381b99d14c0446ff6"} Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.261065 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6fccd6cf564b81188d07e524bd626bec7acfc706a5beaba1ca477b68fbc70374"} Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.261999 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"904626803ff1b3040d64219396e5b1bd5475d49bc62c6c40e07f7f87d4518b5a"} Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.262901 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"f07285fbb2067c615bfd6e078bcb2e6f0f14d78d7ebb53c7cd82cf1951d09c99"} Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.263928 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"2fba7b56090cf55da408a8cc1427cdada306b5ec2b808be67238f6df9d389897"} Jan 26 00:09:31 crc kubenswrapper[5121]: E0126 00:09:31.308968 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:31 crc kubenswrapper[5121]: E0126 00:09:31.468930 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:31 crc kubenswrapper[5121]: E0126 00:09:31.596203 5121 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s" Jan 26 00:09:31 crc kubenswrapper[5121]: E0126 00:09:31.667328 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.857059 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.858944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.859006 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.859024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:31 crc kubenswrapper[5121]: I0126 00:09:31.859056 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:31 crc kubenswrapper[5121]: E0126 00:09:31.863698 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.171054 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.193732 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:32 crc kubenswrapper[5121]: E0126 00:09:32.195684 5121 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.268665 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"855389b083bbd99f659c00bff24ccfcfe4dacfc551d4ac4d924b081ee77c7c3b"} Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.268708 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7bbee692b26148f3cf9f0ec03667a2ea9a53cb25959b0943bff5a385d689ef96"} Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.268720 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5e6668a0c98be81d0ab3e7d49087ddd61adef168c6096384ab6abc679063ae21"} Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.270305 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" exitCode=0 Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.270371 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f"} Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.270550 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.271135 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.271176 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.271192 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:32 crc kubenswrapper[5121]: E0126 00:09:32.271389 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.272738 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.274160 5121 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="48493ba4cd6896463880bf126c38470cc129ffc49b9a5434d62e7c1a72cfd70a" exitCode=0 Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.274214 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"48493ba4cd6896463880bf126c38470cc129ffc49b9a5434d62e7c1a72cfd70a"} Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.274224 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.274294 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.274305 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.275006 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.275737 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.275810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.275825 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 
00:09:32.275917 5121 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="5c072b777e5f70825e775dc12780df8afea0f5b80a2af913b2f4707f3cf16791" exitCode=0 Jan 26 00:09:32 crc kubenswrapper[5121]: E0126 00:09:32.276177 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.276112 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"5c072b777e5f70825e775dc12780df8afea0f5b80a2af913b2f4707f3cf16791"} Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.276031 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.277705 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.277743 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.277774 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:32 crc kubenswrapper[5121]: E0126 00:09:32.278354 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.279112 5121 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="aceab8de5b6b9b5b7aac302f8c51b1bbc371942a08fab2e4c6470e393504c9a7" exitCode=0 Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.279205 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"aceab8de5b6b9b5b7aac302f8c51b1bbc371942a08fab2e4c6470e393504c9a7"} Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.279398 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.281647 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.281713 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:32 crc kubenswrapper[5121]: I0126 00:09:32.281747 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:32 crc kubenswrapper[5121]: E0126 00:09:32.282134 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.026878 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 
00:09:33.170866 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.197799 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.299163 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"8380b2123ce006828b584ca050406847ba46b7c7922aee63a5478e3a5aa5ad48"} Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.299360 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.303527 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b3dd06a6a32571d3dea75af9d72eba4643561bd7ea1f9f864c2c7c7532672697"} Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.303656 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.304251 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.304279 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.304288 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.304462 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.304856 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.304873 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.304888 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.305019 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.317610 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479"} Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.320264 5121 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" 
containerID="42222b8358887ebe049e32c11c6a9cd8b8ffe89c147673e9c8d5eab241c976dd" exitCode=0 Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.320340 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"42222b8358887ebe049e32c11c6a9cd8b8ffe89c147673e9c8d5eab241c976dd"} Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.320567 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.321385 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.321413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.321422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.321591 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.325387 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ad1e50b5e156ce53e16af2f152f86158e6f97561c9856892a806af8c47fc2bd0"} Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.463812 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.472465 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.472506 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.472519 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:33 crc kubenswrapper[5121]: I0126 00:09:33.472580 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.473457 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.683686 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:33 crc kubenswrapper[5121]: E0126 00:09:33.871920 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.170027 
5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 26 00:09:34 crc kubenswrapper[5121]: E0126 00:09:34.286452 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.346672 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166"} Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.346759 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed"} Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.348056 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1681cc53263329eec4b0f547110bfd62631c16ee0fe5be25a7e0a6da06e290da"} Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.348241 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.389544 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.389602 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.389622 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:34 crc kubenswrapper[5121]: E0126 00:09:34.390084 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.396129 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e69e8687b94055a320e18610e1b59d1ca459a45702d2f973c16bd8154f57e9ac"} Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.396195 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"cba3a7bf51b9b3aea8b907fb694aab2e85364c8608cceb3b51f20260899e3e07"} Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.396480 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.396720 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.397237 5121 kubelet_node_status.go:736] "Recording event 
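The list URLs the reflectors keep retrying are plain percent-encodings of client-go field selectors: `fieldSelector=metadata.name%3Dcrc` is `metadata.name=crc` (the kubelet lists only its own Node object), and `fieldSelector=spec.clusterIP%21%3DNone` is `spec.clusterIP!=None` (headless Services excluded). A standard-library sketch of that encoding (illustrative; client-go assembles these URLs internally):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	q := url.Values{}
	// The kubelet's Node informer restricts the list to its own node name.
	q.Set("fieldSelector", "metadata.name=crc")
	q.Set("limit", "500")
	q.Set("resourceVersion", "0")

	u := url.URL{
		Scheme:   "https",
		Host:     "api-int.crc.testing:6443",
		Path:     "/api/v1/nodes",
		RawQuery: q.Encode(),
	}
	// Prints the exact URL shape seen in the *v1.Node reflector errors:
	// https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0
	fmt.Println(u.String())

	// "!=" selectors encode the same way: '!' -> %21 and '=' -> %3D.
	fmt.Println(url.QueryEscape("spec.clusterIP!=None")) // spec.clusterIP%21%3DNone
}
```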
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.397284 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.397299 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:34 crc kubenswrapper[5121]: E0126 00:09:34.397539 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.397546 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.397584 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.397598 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.398174 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:34 crc kubenswrapper[5121]: E0126 00:09:34.398214 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.399396 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.399433 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:34 crc kubenswrapper[5121]: I0126 00:09:34.399449 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:34 crc kubenswrapper[5121]: E0126 00:09:34.399699 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.171278 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.400083 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"73d6caae62d1eddf996b3a52c0c914ff39887d40f9d9f6b1f3cfb733932b1481"} Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.400124 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3"} Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.400263 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.400771 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 
00:09:35.400794 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.400803 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:35 crc kubenswrapper[5121]: E0126 00:09:35.400950 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.402323 5121 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="1681cc53263329eec4b0f547110bfd62631c16ee0fe5be25a7e0a6da06e290da" exitCode=0 Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.402428 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.402670 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.402826 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"1681cc53263329eec4b0f547110bfd62631c16ee0fe5be25a7e0a6da06e290da"} Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.402852 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.404054 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.404074 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.404082 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:35 crc kubenswrapper[5121]: E0126 00:09:35.404278 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.404532 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.404552 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:35 crc kubenswrapper[5121]: I0126 00:09:35.404561 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:35 crc kubenswrapper[5121]: E0126 00:09:35.404678 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.397193 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.408350 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c6c473dee6611f9180a0147dfa9615abd9061bf739a35c8b33d9b9f89ba4f214"} Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.408387 
5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d8b08d88899e3155aea334542c3d29e049c3fe6a8fd5a73adc599a3cfdb9f448"} Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.408461 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.408493 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.408662 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.409557 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.409598 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.409610 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.409565 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.409694 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.409711 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:36 crc kubenswrapper[5121]: E0126 00:09:36.409964 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:36 crc kubenswrapper[5121]: E0126 00:09:36.410643 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.674156 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.675334 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.675372 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.675383 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:36 crc kubenswrapper[5121]: I0126 00:09:36.675412 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:37 crc kubenswrapper[5121]: I0126 00:09:37.450497 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:37 crc kubenswrapper[5121]: I0126 00:09:37.450727 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"39fadc7615166dd56bf25845a793eb9c01587806d3264633a20a97b533c6fa3b"} Jan 26 00:09:37 crc kubenswrapper[5121]: I0126 00:09:37.450773 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"01aedcd0f79d1f08ebd2577b28f1e2ba6eb05c07136164d871b41177d002aa8f"} Jan 26 00:09:37 crc kubenswrapper[5121]: I0126 00:09:37.451084 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:37 crc kubenswrapper[5121]: I0126 00:09:37.451105 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:37 crc kubenswrapper[5121]: I0126 00:09:37.451121 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:37 crc kubenswrapper[5121]: E0126 00:09:37.451408 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.045234 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.090737 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.091083 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.092368 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.092409 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.092428 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:38 crc kubenswrapper[5121]: E0126 00:09:38.092789 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.460631 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.460917 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.460035 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"807648f892becfbf951e609e7a97a73c8526e65edcca8aa7fc64db39254d77fc"} Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.463324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.463357 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.463371 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:38 crc kubenswrapper[5121]: E0126 00:09:38.463885 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.464287 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.464327 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.464351 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:38 crc kubenswrapper[5121]: E0126 00:09:38.465278 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.536178 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.536402 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.537999 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.538074 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:38 crc kubenswrapper[5121]: I0126 00:09:38.538104 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:38 crc kubenswrapper[5121]: E0126 00:09:38.538646 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.106443 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.112603 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.462568 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.462644 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.462573 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463412 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463445 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:39 crc kubenswrapper[5121]: E0126 00:09:39.463718 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463810 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463852 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463863 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463815 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463916 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:39 crc kubenswrapper[5121]: I0126 00:09:39.463942 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:39 crc kubenswrapper[5121]: E0126 00:09:39.464299 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:39 crc kubenswrapper[5121]: E0126 00:09:39.464434 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:40 crc kubenswrapper[5121]: E0126 00:09:40.344342 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:41 crc kubenswrapper[5121]: I0126 00:09:41.470271 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 26 00:09:41 crc kubenswrapper[5121]: I0126 00:09:41.470489 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:41 crc kubenswrapper[5121]: I0126 00:09:41.472259 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:41 crc kubenswrapper[5121]: I0126 00:09:41.472306 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:41 crc kubenswrapper[5121]: I0126 00:09:41.472317 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:41 crc kubenswrapper[5121]: E0126 00:09:41.472808 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:41 crc kubenswrapper[5121]: I0126 00:09:41.536607 5121 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 26 00:09:41 crc kubenswrapper[5121]: I0126 00:09:41.536739 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 26 00:09:42 crc kubenswrapper[5121]: I0126 00:09:42.484850 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:42 
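The cluster-policy-controller startup probe above fails with `context deadline exceeded`: the standard Go error when a request's context expires before the server answers. A sketch that reproduces the same error shape against the same endpoint (illustrative, not kubelet code; the 1s timeout is an assumption matching the kubelet's default probe timeout, and skipping certificate verification mirrors how kubelet HTTPS probes behave):

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumption: the target serves a self-signed cert, so verification
	// is skipped for this sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"https://localhost:10357/healthz", nil)
	if err != nil {
		panic(err)
	}

	resp, err := client.Do(req)
	if err != nil {
		// If nothing answers within the context deadline, the error ends in
		// "context deadline exceeded", exactly as in the probe output above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode)
}
```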
Jan 26 00:09:42 crc kubenswrapper[5121]: I0126 00:09:42.485104 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:42 crc kubenswrapper[5121]: I0126 00:09:42.486047 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:42 crc kubenswrapper[5121]: I0126 00:09:42.486078 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:42 crc kubenswrapper[5121]: I0126 00:09:42.486087 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:42 crc kubenswrapper[5121]: E0126 00:09:42.486335 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:42 crc kubenswrapper[5121]: I0126 00:09:42.492308 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:09:43 crc kubenswrapper[5121]: I0126 00:09:43.472306 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:43 crc kubenswrapper[5121]: I0126 00:09:43.472949 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:43 crc kubenswrapper[5121]: I0126 00:09:43.473119 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:43 crc kubenswrapper[5121]: I0126 00:09:43.473141 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:43 crc kubenswrapper[5121]: E0126 00:09:43.474015 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:43 crc kubenswrapper[5121]: I0126 00:09:43.477630 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:09:44 crc kubenswrapper[5121]: I0126 00:09:44.474868 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:44 crc kubenswrapper[5121]: I0126 00:09:44.475499 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:44 crc kubenswrapper[5121]: I0126 00:09:44.475539 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:44 crc kubenswrapper[5121]: I0126 00:09:44.475549 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:44 crc kubenswrapper[5121]: E0126 00:09:44.475861 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:46 crc kubenswrapper[5121]: I0126 00:09:46.171599 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 26 00:09:46 crc kubenswrapper[5121]: E0126 00:09:46.401235 5121 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 26 00:09:46 crc kubenswrapper[5121]: E0126 00:09:46.401299 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Jan 26 00:09:46 crc kubenswrapper[5121]: E0126 00:09:46.676961 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 26 00:09:46 crc kubenswrapper[5121]: I0126 00:09:46.909839 5121 trace.go:236] Trace[1542021755]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:36.907) (total time: 10002ms):
Jan 26 00:09:46 crc kubenswrapper[5121]: Trace[1542021755]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:09:46.909)
Jan 26 00:09:46 crc kubenswrapper[5121]: Trace[1542021755]: [10.002162463s] [10.002162463s] END
Jan 26 00:09:46 crc kubenswrapper[5121]: E0126 00:09:46.910473 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.519125 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.521694 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="73d6caae62d1eddf996b3a52c0c914ff39887d40f9d9f6b1f3cfb733932b1481" exitCode=255
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.521808 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"73d6caae62d1eddf996b3a52c0c914ff39887d40f9d9f6b1f3cfb733932b1481"}
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.522056 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.522887 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.522939 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.522953 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:47 crc kubenswrapper[5121]: E0126 00:09:47.523312 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.523590 5121 scope.go:117] "RemoveContainer" containerID="73d6caae62d1eddf996b3a52c0c914ff39887d40f9d9f6b1f3cfb733932b1481"
Jan 26 00:09:47 crc kubenswrapper[5121]: E0126 00:09:47.767390 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188e1f58e045d8dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.183440604 +0000 UTC m=+1.342641729,LastTimestamp:2026-01-26 00:09:30.183440604 +0000 UTC m=+1.342641729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.943972 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.944795 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.948136 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.948185 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:47 crc kubenswrapper[5121]: I0126 00:09:47.948195 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:47 crc kubenswrapper[5121]: E0126 00:09:47.948808 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.013085 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.045929 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body=
Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.046045 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded"
Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.388989 5121 trace.go:236] Trace[1656310257]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:38.387) (total time: 10001ms):
Jan 26 00:09:48 crc kubenswrapper[5121]: Trace[1656310257]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:48.388)
Jan 26 00:09:48 crc kubenswrapper[5121]: Trace[1656310257]: [10.001124536s] [10.001124536s] END
Jan 26 00:09:48 crc kubenswrapper[5121]: E0126 00:09:48.389038 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.516308 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.516377 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.526500 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.527867 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec"} Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528050 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528155 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528693 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528725 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528722 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528737 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528775 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.528798 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:48 crc kubenswrapper[5121]: E0126 00:09:48.529083 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:48 crc kubenswrapper[5121]: E0126 00:09:48.529394 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:48 crc kubenswrapper[5121]: I0126 00:09:48.543573 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 00:09:49 crc kubenswrapper[5121]: I0126 
00:09:49.538660 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:49 crc kubenswrapper[5121]: I0126 00:09:49.539953 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:49 crc kubenswrapper[5121]: I0126 00:09:49.540006 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:49 crc kubenswrapper[5121]: I0126 00:09:49.540025 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:49 crc kubenswrapper[5121]: E0126 00:09:49.540791 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:50 crc kubenswrapper[5121]: E0126 00:09:50.344686 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:09:51 crc kubenswrapper[5121]: I0126 00:09:51.537893 5121 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:09:51 crc kubenswrapper[5121]: I0126 00:09:51.537992 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:09:52 crc kubenswrapper[5121]: E0126 00:09:52.805138 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.050793 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.051222 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.051327 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.052274 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.052370 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.052429 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:53 crc kubenswrapper[5121]: E0126 00:09:53.052752 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.055546 
5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.078124 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.080200 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.080374 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.080484 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.080607 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:09:53 crc kubenswrapper[5121]: E0126 00:09:53.093303 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.510183 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.510186 5121 trace.go:236] Trace[76604466]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:38.666) (total time: 14843ms): Jan 26 00:09:53 crc kubenswrapper[5121]: Trace[76604466]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14843ms (00:09:53.510) Jan 26 00:09:53 crc kubenswrapper[5121]: Trace[76604466]: [14.843438946s] [14.843438946s] END Jan 26 00:09:53 crc kubenswrapper[5121]: E0126 00:09:53.510284 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.526285 5121 trace.go:236] Trace[233750376]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:09:39.778) (total time: 13747ms): Jan 26 00:09:53 crc kubenswrapper[5121]: Trace[233750376]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 13747ms (00:09:53.526) Jan 26 00:09:53 crc kubenswrapper[5121]: Trace[233750376]: [13.747979399s] [13.747979399s] END Jan 26 00:09:53 crc kubenswrapper[5121]: E0126 00:09:53.526721 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.557572 5121 kubelet_node_status.go:413] "Setting node annotation to enable 
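TLS now works, but requests are rejected with 403s for `system:anonymous`: the kubelet's client certificate has not been issued yet (a fresh CSR attempt only begins with the `Rotating certificates` entry at 00:09:54 below), so its requests fall back to anonymous auth and are forbidden. The startup probe above even captures the apiserver's `Status` body verbatim; a sketch decoding that exact payload with the standard library (the struct mirrors only the fields visible in the log):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors the subset of the Kubernetes Status fields quoted in the log.
type status struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
	Status     string `json:"status"`
	Message    string `json:"message"`
	Reason     string `json:"reason"`
	Code       int    `json:"code"`
}

func main() {
	// The exact 403 body from the kube-apiserver startup probe above.
	body := `{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}`

	var s status
	if err := json.Unmarshal([]byte(body), &s); err != nil {
		panic(err)
	}
	// Prints: Forbidden (403): forbidden: User "system:anonymous" cannot get path "/livez"
	fmt.Printf("%s (%d): %s\n", s.Reason, s.Code, s.Message)
}
```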
volume controller attach/detach" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.559189 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.559349 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:53 crc kubenswrapper[5121]: I0126 00:09:53.559476 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:53 crc kubenswrapper[5121]: E0126 00:09:53.560211 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.227104 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:54 crc kubenswrapper[5121]: E0126 00:09:54.520168 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.570948 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.573963 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.576696 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec" exitCode=255 Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.576730 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec"} Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.576803 5121 scope.go:117] "RemoveContainer" containerID="73d6caae62d1eddf996b3a52c0c914ff39887d40f9d9f6b1f3cfb733932b1481" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.576931 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.578023 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.578211 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.578336 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:54 crc kubenswrapper[5121]: E0126 00:09:54.578993 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:54 crc 
kubenswrapper[5121]: I0126 00:09:54.579482 5121 scope.go:117] "RemoveContainer" containerID="4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec" Jan 26 00:09:54 crc kubenswrapper[5121]: E0126 00:09:54.580054 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.947610 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 26 00:09:54 crc kubenswrapper[5121]: I0126 00:09:54.965752 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:09:55 crc kubenswrapper[5121]: I0126 00:09:55.175025 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:55 crc kubenswrapper[5121]: I0126 00:09:55.582294 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:09:55 crc kubenswrapper[5121]: I0126 00:09:55.584801 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:55 crc kubenswrapper[5121]: I0126 00:09:55.585435 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:55 crc kubenswrapper[5121]: I0126 00:09:55.585475 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:55 crc kubenswrapper[5121]: I0126 00:09:55.585492 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:55 crc kubenswrapper[5121]: E0126 00:09:55.585945 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:55 crc kubenswrapper[5121]: I0126 00:09:55.586329 5121 scope.go:117] "RemoveContainer" containerID="4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec" Jan 26 00:09:55 crc kubenswrapper[5121]: E0126 00:09:55.586580 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:09:56 crc kubenswrapper[5121]: I0126 00:09:56.185218 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:57 crc kubenswrapper[5121]: I0126 00:09:57.227076 5121 csi_plugin.go:988] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.613464 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.774999 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e045d8dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.183440604 +0000 UTC m=+1.342641729,LastTimestamp:2026-01-26 00:09:30.183440604 +0000 UTC m=+1.342641729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.781783 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.786364 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.797031 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5329297 default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC m=+1.425264636,LastTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC m=+1.425264636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.811121 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e9a543a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.340688807 +0000 UTC m=+1.499889932,LastTimestamp:2026-01-26 00:09:30.340688807 +0000 UTC m=+1.499889932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.822161 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5322b6c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.356633216 +0000 UTC m=+1.515834351,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.842684 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e532696a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.356659447 +0000 UTC m=+1.515860572,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.849332 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5329297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.188e1f58e5329297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC m=+1.425264636,LastTimestamp:2026-01-26 00:09:30.356672258 +0000 UTC m=+1.515873383,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.858067 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5322b6c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.358030658 +0000 UTC m=+1.517231783,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.868543 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e532696a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.358050129 +0000 UTC m=+1.517251254,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.874814 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5329297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5329297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC m=+1.425264636,LastTimestamp:2026-01-26 00:09:30.358061339 +0000 UTC m=+1.517262464,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.881420 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5322b6c\" is forbidden: 
User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.358519353 +0000 UTC m=+1.517720478,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.886659 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e532696a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.358543324 +0000 UTC m=+1.517744449,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.892990 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5329297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5329297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC m=+1.425264636,LastTimestamp:2026-01-26 00:09:30.358560354 +0000 UTC m=+1.517761479,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.897228 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5322b6c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.35972767 +0000 UTC m=+1.518928795,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.901380 
5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e532696a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.35974474 +0000 UTC m=+1.518945865,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.906079 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5329297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5329297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC m=+1.425264636,LastTimestamp:2026-01-26 00:09:30.359769311 +0000 UTC m=+1.518970436,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.910466 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5322b6c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.359844513 +0000 UTC m=+1.519045638,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.917138 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e532696a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.359868004 +0000 UTC m=+1.519069119,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.922100 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5329297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5329297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC m=+1.425264636,LastTimestamp:2026-01-26 00:09:30.359880384 +0000 UTC m=+1.519081509,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.926195 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5322b6c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.361246535 +0000 UTC m=+1.520447660,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.934521 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e532696a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.361265676 +0000 UTC m=+1.520466801,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.941796 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5329297\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5329297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.266063511 +0000 UTC 
m=+1.425264636,LastTimestamp:2026-01-26 00:09:30.361276516 +0000 UTC m=+1.520477641,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.947137 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e5322b6c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e5322b6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.2660371 +0000 UTC m=+1.425238225,LastTimestamp:2026-01-26 00:09:30.361331728 +0000 UTC m=+1.520532853,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.951689 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188e1f58e532696a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188e1f58e532696a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.26605297 +0000 UTC m=+1.425254095,LastTimestamp:2026-01-26 00:09:30.361348648 +0000 UTC m=+1.520549773,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.957204 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f59044710f2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.787500274 +0000 UTC m=+1.946701389,LastTimestamp:2026-01-26 00:09:30.787500274 +0000 UTC m=+1.946701389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.962117 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f5905f81fa8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.815881128 +0000 UTC m=+1.975082263,LastTimestamp:2026-01-26 00:09:30.815881128 +0000 UTC m=+1.975082263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.966126 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59064c09f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.821380593 +0000 UTC m=+1.980581728,LastTimestamp:2026-01-26 00:09:30.821380593 +0000 UTC m=+1.980581728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.970564 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f59065695f2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.822071794 +0000 UTC m=+1.981272919,LastTimestamp:2026-01-26 00:09:30.822071794 +0000 UTC m=+1.981272919,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.976282 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f59070e455b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:30.834109787 +0000 UTC m=+1.993310912,LastTimestamp:2026-01-26 00:09:30.834109787 +0000 UTC m=+1.993310912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.981753 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f592503ea18 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.336747544 +0000 UTC m=+2.495948669,LastTimestamp:2026-01-26 00:09:31.336747544 +0000 UTC m=+2.495948669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.986626 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f592503fb2a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.336751914 +0000 UTC m=+2.495953049,LastTimestamp:2026-01-26 00:09:31.336751914 +0000 UTC m=+2.495953049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.993505 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f5925048f19 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.336789785 +0000 UTC m=+2.495990910,LastTimestamp:2026-01-26 00:09:31.336789785 +0000 UTC m=+2.495990910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:57 crc kubenswrapper[5121]: E0126 00:09:57.997883 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f592504b3be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.336799166 +0000 UTC m=+2.496000291,LastTimestamp:2026-01-26 00:09:31.336799166 +0000 UTC m=+2.496000291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.001809 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f59278cefd4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.379281876 +0000 UTC m=+2.538483171,LastTimestamp:2026-01-26 00:09:31.379281876 +0000 UTC m=+2.538483171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.008427 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59278d67cb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.379312587 +0000 UTC m=+2.538513752,LastTimestamp:2026-01-26 00:09:31.379312587 +0000 UTC m=+2.538513752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.012476 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f59278f55fc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.3794391 +0000 UTC m=+2.538640265,LastTimestamp:2026-01-26 00:09:31.3794391 +0000 UTC m=+2.538640265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.016387 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f5927916d50 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.379576144 +0000 UTC m=+2.538777269,LastTimestamp:2026-01-26 00:09:31.379576144 +0000 UTC m=+2.538777269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.023946 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5927b2979d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.381749661 +0000 UTC m=+2.540950786,LastTimestamp:2026-01-26 00:09:31.381749661 +0000 UTC m=+2.540950786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.032014 5121 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f592930bff2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.406794738 +0000 UTC m=+2.565995863,LastTimestamp:2026-01-26 00:09:31.406794738 +0000 UTC m=+2.565995863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.097074 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f593a4ad257 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.693716055 +0000 UTC m=+2.852917180,LastTimestamp:2026-01-26 00:09:31.693716055 +0000 UTC m=+2.852917180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.098873 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f593b0af6f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.706308345 +0000 UTC m=+2.865509470,LastTimestamp:2026-01-26 00:09:31.706308345 +0000 UTC m=+2.865509470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.102067 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f593b1c5629 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.707446825 +0000 UTC m=+2.866647950,LastTimestamp:2026-01-26 00:09:31.707446825 +0000 UTC m=+2.866647950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.105265 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5945e1c7c9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:31.888158665 +0000 UTC m=+3.047359790,LastTimestamp:2026-01-26 00:09:31.888158665 +0000 UTC m=+3.047359790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.108590 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f595a65a367 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.232344423 +0000 UTC m=+3.391545548,LastTimestamp:2026-01-26 00:09:32.232344423 +0000 UTC m=+3.391545548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.110384 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f595ae9b7ef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.241000431 +0000 UTC m=+3.400201556,LastTimestamp:2026-01-26 00:09:32.241000431 +0000 UTC m=+3.400201556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.113188 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f595af7c6bd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.241921725 +0000 UTC m=+3.401122850,LastTimestamp:2026-01-26 00:09:32.241921725 +0000 UTC m=+3.401122850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.114940 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f595ccb51d8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.272562648 +0000 UTC m=+3.431763793,LastTimestamp:2026-01-26 00:09:32.272562648 +0000 UTC m=+3.431763793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.118152 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f595d1b39b2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.277799346 +0000 UTC m=+3.437000481,LastTimestamp:2026-01-26 00:09:32.277799346 +0000 UTC m=+3.437000481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.123888 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f595d361fda openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.279562202 +0000 UTC m=+3.438763327,LastTimestamp:2026-01-26 00:09:32.279562202 +0000 UTC m=+3.438763327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.128352 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f595d6ec9f9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.283275769 +0000 UTC m=+3.442476904,LastTimestamp:2026-01-26 00:09:32.283275769 +0000 UTC m=+3.442476904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.132611 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f5984473086 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.934992006 +0000 UTC m=+4.094193131,LastTimestamp:2026-01-26 00:09:32.934992006 +0000 UTC m=+4.094193131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.136283 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f59845e3267 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:32.936499815 +0000 UTC m=+4.095700940,LastTimestamp:2026-01-26 00:09:32.936499815 +0000 UTC m=+4.095700940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.140204 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e1f598e3d543e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.10211795 +0000 UTC m=+4.261319075,LastTimestamp:2026-01-26 00:09:33.10211795 +0000 UTC m=+4.261319075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.143809 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f598f85ae80 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.123636864 +0000 UTC 
m=+4.282838019,LastTimestamp:2026-01-26 00:09:33.123636864 +0000 UTC m=+4.282838019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.149558 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f598fa7cb95 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.125872533 +0000 UTC m=+4.285073688,LastTimestamp:2026-01-26 00:09:33.125872533 +0000 UTC m=+4.285073688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.153900 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f598fa830f4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.125898484 +0000 UTC m=+4.285099649,LastTimestamp:2026-01-26 00:09:33.125898484 +0000 UTC m=+4.285099649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.158265 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f598faa6d50 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.126045008 +0000 UTC m=+4.285246173,LastTimestamp:2026-01-26 00:09:33.126045008 +0000 UTC m=+4.285246173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.163815 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f598fc07bde openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.127490526 +0000 UTC m=+4.286691691,LastTimestamp:2026-01-26 00:09:33.127490526 +0000 UTC m=+4.286691691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.167610 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5990f10fad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.147451309 +0000 UTC m=+4.306652434,LastTimestamp:2026-01-26 00:09:33.147451309 +0000 UTC m=+4.306652434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.171777 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.171860 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5990fabd2a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.148085546 +0000 UTC m=+4.307286671,LastTimestamp:2026-01-26 00:09:33.148085546 +0000 UTC m=+4.307286671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.174371 5121 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5990ff7aac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.148396204 +0000 UTC m=+4.307597329,LastTimestamp:2026-01-26 00:09:33.148396204 +0000 UTC m=+4.307597329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.176381 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5991182d57 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.150014807 +0000 UTC m=+4.309215922,LastTimestamp:2026-01-26 00:09:33.150014807 +0000 UTC m=+4.309215922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.178280 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f599b66f271 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.322949233 +0000 UTC m=+4.482150358,LastTimestamp:2026-01-26 00:09:33.322949233 +0000 UTC m=+4.482150358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.181390 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59a7e0518b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.532230027 +0000 UTC m=+4.691431162,LastTimestamp:2026-01-26 00:09:33.532230027 +0000 UTC m=+4.691431162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.185842 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f59a8128c29 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.535521833 +0000 UTC m=+4.694722958,LastTimestamp:2026-01-26 00:09:33.535521833 +0000 UTC m=+4.694722958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.192280 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f59ad7ead01 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.626494209 +0000 UTC m=+4.785695334,LastTimestamp:2026-01-26 00:09:33.626494209 +0000 UTC m=+4.785695334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.197753 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f59ad90bc0f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.627677711 +0000 UTC m=+4.786878836,LastTimestamp:2026-01-26 00:09:33.627677711 +0000 UTC m=+4.786878836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.200894 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59ad96d243 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.628076611 +0000 UTC m=+4.787277736,LastTimestamp:2026-01-26 00:09:33.628076611 +0000 UTC m=+4.787277736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.203450 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59ada5fd73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:33.629070707 +0000 UTC m=+4.788271832,LastTimestamp:2026-01-26 00:09:33.629070707 +0000 UTC m=+4.788271832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.205459 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f59d237ac96 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.242598038 +0000 UTC m=+5.401799163,LastTimestamp:2026-01-26 00:09:34.242598038 +0000 UTC 
m=+5.401799163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.209708 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59d2b5283f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.250821695 +0000 UTC m=+5.410022830,LastTimestamp:2026-01-26 00:09:34.250821695 +0000 UTC m=+5.410022830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.213937 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f59d2b8f35d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.251070301 +0000 UTC m=+5.410271436,LastTimestamp:2026-01-26 00:09:34.251070301 +0000 UTC m=+5.410271436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.219543 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59d4c26586 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.285243782 +0000 UTC m=+5.444444907,LastTimestamp:2026-01-26 00:09:34.285243782 +0000 UTC m=+5.444444907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.227497 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f59d4d678d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.286559446 +0000 UTC m=+5.445760571,LastTimestamp:2026-01-26 00:09:34.286559446 +0000 UTC m=+5.445760571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.235107 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188e1f59d4d6839e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.286562206 +0000 UTC m=+5.445763331,LastTimestamp:2026-01-26 00:09:34.286562206 +0000 UTC m=+5.445763331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.239782 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f59d4ea0b98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:34.2878422 +0000 UTC m=+5.447043325,LastTimestamp:2026-01-26 00:09:34.2878422 +0000 UTC m=+5.447043325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.245010 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a03a93c36 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.072123958 +0000 UTC m=+6.231325073,LastTimestamp:2026-01-26 00:09:35.072123958 +0000 UTC m=+6.231325073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.249550 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a075668d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.133804753 +0000 UTC m=+6.293005878,LastTimestamp:2026-01-26 00:09:35.133804753 +0000 UTC m=+6.293005878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.253976 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a077a3f47 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.136153415 +0000 UTC m=+6.295354540,LastTimestamp:2026-01-26 00:09:35.136153415 +0000 UTC m=+6.295354540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.257653 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a1673fc2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.387401263 +0000 UTC m=+6.546602408,LastTimestamp:2026-01-26 00:09:35.387401263 +0000 UTC m=+6.546602408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.261671 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a1709e900 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.397226752 +0000 UTC m=+6.556427887,LastTimestamp:2026-01-26 00:09:35.397226752 +0000 UTC m=+6.556427887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.265813 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a178b553f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.405708607 +0000 UTC m=+6.564909742,LastTimestamp:2026-01-26 00:09:35.405708607 +0000 UTC m=+6.564909742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.269321 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a298dad71 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.707852145 +0000 UTC m=+6.867053260,LastTimestamp:2026-01-26 00:09:35.707852145 +0000 UTC m=+6.867053260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.273329 5121 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a2a67af52 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.722139474 +0000 UTC m=+6.881340599,LastTimestamp:2026-01-26 00:09:35.722139474 +0000 UTC m=+6.881340599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.279069 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a2a871423 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.724196899 +0000 UTC m=+6.883398024,LastTimestamp:2026-01-26 00:09:35.724196899 +0000 UTC m=+6.883398024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.283169 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a3f8d8adc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:36.076942044 +0000 UTC m=+7.236143179,LastTimestamp:2026-01-26 00:09:36.076942044 +0000 UTC m=+7.236143179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.286953 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a405d424a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container 
etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:36.090554954 +0000 UTC m=+7.249756079,LastTimestamp:2026-01-26 00:09:36.090554954 +0000 UTC m=+7.249756079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.290695 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a40803838 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:36.092846136 +0000 UTC m=+7.252047261,LastTimestamp:2026-01-26 00:09:36.092846136 +0000 UTC m=+7.252047261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.297171 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a531ecdae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:36.405228974 +0000 UTC m=+7.564430119,LastTimestamp:2026-01-26 00:09:36.405228974 +0000 UTC m=+7.564430119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.302629 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a5aab9e81 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:36.531897985 +0000 UTC m=+7.691099130,LastTimestamp:2026-01-26 00:09:36.531897985 +0000 UTC m=+7.691099130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.307898 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a5ac0275e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:36.533243742 +0000 UTC m=+7.692444857,LastTimestamp:2026-01-26 00:09:36.533243742 +0000 UTC m=+7.692444857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.313777 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a77c7f6c7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:37.020294855 +0000 UTC m=+8.179496000,LastTimestamp:2026-01-26 00:09:37.020294855 +0000 UTC m=+8.179496000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.318967 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a79524053 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:37.046134867 +0000 UTC m=+8.205335992,LastTimestamp:2026-01-26 00:09:37.046134867 +0000 UTC m=+8.205335992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.323123 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a7969c56d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:37.047676269 +0000 UTC m=+8.206877384,LastTimestamp:2026-01-26 00:09:37.047676269 +0000 UTC m=+8.206877384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.328621 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a938ed28d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:37.486312077 +0000 UTC m=+8.645513202,LastTimestamp:2026-01-26 00:09:37.486312077 +0000 UTC m=+8.645513202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.334169 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188e1f5a9ce5714a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:37.642983754 +0000 UTC m=+8.802184879,LastTimestamp:2026-01-26 00:09:37.642983754 +0000 UTC m=+8.802184879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.340662 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 26 00:09:58 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-controller-manager-crc.188e1f5b84faa4a4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 26 00:09:58 crc kubenswrapper[5121]: body: Jan 26 00:09:58 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:41.536687268 +0000 UTC m=+12.695888393,LastTimestamp:2026-01-26 00:09:41.536687268 +0000 UTC m=+12.695888393,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:58 crc kubenswrapper[5121]: 
> Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.345541 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5b84fd1a26 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:41.536848422 +0000 UTC m=+12.696049547,LastTimestamp:2026-01-26 00:09:41.536848422 +0000 UTC m=+12.696049547,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.350640 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5a077a3f47\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a077a3f47 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.136153415 +0000 UTC m=+6.295354540,LastTimestamp:2026-01-26 00:09:47.524575207 +0000 UTC m=+18.683776332,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.355298 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5a1673fc2f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a1673fc2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.387401263 +0000 UTC m=+6.546602408,LastTimestamp:2026-01-26 00:09:47.938064022 +0000 UTC m=+19.097265147,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.360515 5121 event.go:359] "Server rejected event (will not 
retry!)" err="events \"kube-apiserver-crc.188e1f5a1709e900\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a1709e900 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.397226752 +0000 UTC m=+6.556427887,LastTimestamp:2026-01-26 00:09:47.957965518 +0000 UTC m=+19.117166643,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.365967 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 26 00:09:58 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f5d08f6b1f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": context deadline exceeded Jan 26 00:09:58 crc kubenswrapper[5121]: body: Jan 26 00:09:58 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:48.045988342 +0000 UTC m=+19.205189467,LastTimestamp:2026-01-26 00:09:48.045988342 +0000 UTC m=+19.205189467,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:58 crc kubenswrapper[5121]: > Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.370910 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5d08f7f131 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:48.046070065 +0000 UTC m=+19.205271190,LastTimestamp:2026-01-26 00:09:48.046070065 +0000 UTC m=+19.205271190,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.375282 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event=< Jan 26 00:09:58 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.188e1f5d24ffeb25 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 26 00:09:58 crc kubenswrapper[5121]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 00:09:58 crc kubenswrapper[5121]: Jan 26 00:09:58 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:48.516354853 +0000 UTC m=+19.675555978,LastTimestamp:2026-01-26 00:09:48.516354853 +0000 UTC m=+19.675555978,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:58 crc kubenswrapper[5121]: > Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.380736 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5d25009896 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:48.516399254 +0000 UTC m=+19.675600379,LastTimestamp:2026-01-26 00:09:48.516399254 +0000 UTC m=+19.675600379,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.385987 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 26 00:09:58 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-controller-manager-crc.188e1f5dd91a0bb1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 26 00:09:58 crc kubenswrapper[5121]: body: Jan 26 00:09:58 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:51.537966001 +0000 UTC m=+22.697167126,LastTimestamp:2026-01-26 00:09:51.537966001 +0000 UTC m=+22.697167126,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 26 00:09:58 crc kubenswrapper[5121]: > Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.393259 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188e1f5dd91ad651 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:51.538017873 +0000 UTC m=+22.697218998,LastTimestamp:2026-01-26 00:09:51.538017873 +0000 UTC m=+22.697218998,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.399137 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5e8e6b9c69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:54.579987561 +0000 UTC m=+25.739188686,LastTimestamp:2026-01-26 00:09:54.579987561 +0000 UTC m=+25.739188686,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.404634 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5e8e6b9c69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5e8e6b9c69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:54.579987561 +0000 UTC m=+25.739188686,LastTimestamp:2026-01-26 00:09:55.586549712 +0000 UTC 
m=+26.745750857,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.542817 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.543233 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.544515 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.544637 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.544750 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.545182 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.551524 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.594527 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.595606 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.595659 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:58 crc kubenswrapper[5121]: I0126 00:09:58.595673 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:58 crc kubenswrapper[5121]: E0126 00:09:58.596183 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:09:59 crc kubenswrapper[5121]: I0126 00:09:59.174885 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:09:59 crc kubenswrapper[5121]: E0126 00:09:59.816262 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:00 crc kubenswrapper[5121]: I0126 00:10:00.146933 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:00 crc kubenswrapper[5121]: I0126 00:10:00.183086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:00 crc kubenswrapper[5121]: I0126 00:10:00.183383 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:00 crc kubenswrapper[5121]: I0126 00:10:00.183540 
5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:00 crc kubenswrapper[5121]: I0126 00:10:00.183681 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:00 crc kubenswrapper[5121]: I0126 00:10:00.355733 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:00 crc kubenswrapper[5121]: E0126 00:10:00.356138 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:00 crc kubenswrapper[5121]: E0126 00:10:00.374805 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:01 crc kubenswrapper[5121]: I0126 00:10:01.178430 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:01 crc kubenswrapper[5121]: I0126 00:10:01.393195 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:01 crc kubenswrapper[5121]: I0126 00:10:01.393655 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:01 crc kubenswrapper[5121]: I0126 00:10:01.394989 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:01 crc kubenswrapper[5121]: I0126 00:10:01.395152 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:01 crc kubenswrapper[5121]: I0126 00:10:01.395437 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:01 crc kubenswrapper[5121]: E0126 00:10:01.396097 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:01 crc kubenswrapper[5121]: I0126 00:10:01.396505 5121 scope.go:117] "RemoveContainer" containerID="4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec" Jan 26 00:10:01 crc kubenswrapper[5121]: E0126 00:10:01.396855 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:01 crc kubenswrapper[5121]: E0126 00:10:01.471410 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5e8e6b9c69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5e8e6b9c69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:54.579987561 +0000 UTC m=+25.739188686,LastTimestamp:2026-01-26 00:10:01.396814844 +0000 UTC m=+32.556015969,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:02 crc kubenswrapper[5121]: I0126 00:10:02.176741 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:03 crc kubenswrapper[5121]: I0126 00:10:03.178623 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:03 crc kubenswrapper[5121]: E0126 00:10:03.190613 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:10:03 crc kubenswrapper[5121]: E0126 00:10:03.856449 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:10:04 crc kubenswrapper[5121]: I0126 00:10:04.174289 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:05 crc kubenswrapper[5121]: I0126 00:10:05.176285 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:06 crc kubenswrapper[5121]: I0126 00:10:06.175278 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:06 crc kubenswrapper[5121]: E0126 00:10:06.822858 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:07 crc kubenswrapper[5121]: I0126 00:10:07.177303 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:07 crc kubenswrapper[5121]: I0126 00:10:07.375883 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:07 crc kubenswrapper[5121]: I0126 00:10:07.377745 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:07 crc kubenswrapper[5121]: I0126 00:10:07.377854 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:07 crc kubenswrapper[5121]: I0126 00:10:07.377882 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:07 crc kubenswrapper[5121]: I0126 00:10:07.377932 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:07 crc kubenswrapper[5121]: E0126 00:10:07.390303 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:08 crc kubenswrapper[5121]: I0126 00:10:08.175737 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:09 crc kubenswrapper[5121]: I0126 00:10:09.176441 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:09 crc kubenswrapper[5121]: E0126 00:10:09.680225 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 26 00:10:10 crc kubenswrapper[5121]: I0126 00:10:10.175516 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:10 crc kubenswrapper[5121]: E0126 00:10:10.356674 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:11 crc kubenswrapper[5121]: I0126 00:10:11.174746 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:12 crc kubenswrapper[5121]: I0126 00:10:12.179999 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.176697 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API 
group "storage.k8s.io" at the cluster scope Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.255209 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.256389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.256433 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.256447 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5121]: E0126 00:10:13.256818 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.257074 5121 scope.go:117] "RemoveContainer" containerID="4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec" Jan 26 00:10:13 crc kubenswrapper[5121]: E0126 00:10:13.263806 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5a077a3f47\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a077a3f47 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.136153415 +0000 UTC m=+6.295354540,LastTimestamp:2026-01-26 00:10:13.258463267 +0000 UTC m=+44.417664392,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:13 crc kubenswrapper[5121]: E0126 00:10:13.468750 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5a1673fc2f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a1673fc2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.387401263 +0000 UTC m=+6.546602408,LastTimestamp:2026-01-26 00:10:13.46319998 +0000 UTC m=+44.622401105,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:13 crc kubenswrapper[5121]: E0126 00:10:13.488006 5121 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.188e1f5a1709e900\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5a1709e900 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:35.397226752 +0000 UTC m=+6.556427887,LastTimestamp:2026-01-26 00:10:13.481200961 +0000 UTC m=+44.640402086,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:13 crc kubenswrapper[5121]: E0126 00:10:13.829401 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.948357 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.951691 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0"} Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.951944 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.952774 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.952857 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:13 crc kubenswrapper[5121]: I0126 00:10:13.952882 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:13 crc kubenswrapper[5121]: E0126 00:10:13.953577 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.176047 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.390469 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.391821 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.391866 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 
crc kubenswrapper[5121]: I0126 00:10:14.391880 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.391908 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:14 crc kubenswrapper[5121]: E0126 00:10:14.401980 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.956891 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.957593 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.959714 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0" exitCode=255 Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.959804 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0"} Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.959876 5121 scope.go:117] "RemoveContainer" containerID="4596dbb643ec05375246e1756ece77bed9e843cceb90cf940189a6bd4443dbec" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.960061 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.960625 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.960664 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.960675 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:14 crc kubenswrapper[5121]: E0126 00:10:14.964326 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:14 crc kubenswrapper[5121]: I0126 00:10:14.964984 5121 scope.go:117] "RemoveContainer" containerID="968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0" Jan 26 00:10:14 crc kubenswrapper[5121]: E0126 00:10:14.965541 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:14 crc kubenswrapper[5121]: E0126 00:10:14.971997 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5e8e6b9c69\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5e8e6b9c69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:54.579987561 +0000 UTC m=+25.739188686,LastTimestamp:2026-01-26 00:10:14.965488118 +0000 UTC m=+46.124689253,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:15 crc kubenswrapper[5121]: I0126 00:10:15.178492 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:15 crc kubenswrapper[5121]: I0126 00:10:15.964871 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:16 crc kubenswrapper[5121]: I0126 00:10:16.176011 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:17 crc kubenswrapper[5121]: I0126 00:10:17.176957 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:18 crc kubenswrapper[5121]: E0126 00:10:18.018631 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 26 00:10:18 crc kubenswrapper[5121]: I0126 00:10:18.177878 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:18 crc kubenswrapper[5121]: E0126 00:10:18.833444 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 26 00:10:19 crc kubenswrapper[5121]: I0126 00:10:19.176272 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 
26 00:10:20 crc kubenswrapper[5121]: I0126 00:10:20.177021 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:20 crc kubenswrapper[5121]: E0126 00:10:20.357031 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:20 crc kubenswrapper[5121]: E0126 00:10:20.834697 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.175604 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.392859 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.393139 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.394360 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.394482 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.394515 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5121]: E0126 00:10:21.395425 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.395940 5121 scope.go:117] "RemoveContainer" containerID="968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0" Jan 26 00:10:21 crc kubenswrapper[5121]: E0126 00:10:21.396324 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.402278 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:21 crc kubenswrapper[5121]: E0126 00:10:21.402890 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5e8e6b9c69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5e8e6b9c69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:54.579987561 +0000 UTC m=+25.739188686,LastTimestamp:2026-01-26 00:10:21.396266657 +0000 UTC m=+52.555467802,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.403803 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.403864 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.403880 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:21 crc kubenswrapper[5121]: I0126 00:10:21.403913 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:21 crc kubenswrapper[5121]: E0126 00:10:21.416997 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:22 crc kubenswrapper[5121]: I0126 00:10:22.178812 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:23 crc kubenswrapper[5121]: I0126 00:10:23.175321 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:23 crc kubenswrapper[5121]: I0126 00:10:23.952722 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:23 crc kubenswrapper[5121]: I0126 00:10:23.953245 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:23 crc kubenswrapper[5121]: I0126 00:10:23.954346 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:23 crc kubenswrapper[5121]: I0126 00:10:23.954510 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:23 crc kubenswrapper[5121]: I0126 00:10:23.954602 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:23 crc kubenswrapper[5121]: E0126 00:10:23.955126 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:23 crc kubenswrapper[5121]: I0126 00:10:23.955552 5121 scope.go:117] "RemoveContainer" 
containerID="968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0" Jan 26 00:10:23 crc kubenswrapper[5121]: E0126 00:10:23.955892 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:23 crc kubenswrapper[5121]: E0126 00:10:23.962225 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188e1f5e8e6b9c69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188e1f5e8e6b9c69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:09:54.579987561 +0000 UTC m=+25.739188686,LastTimestamp:2026-01-26 00:10:23.955847301 +0000 UTC m=+55.115048426,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:24 crc kubenswrapper[5121]: I0126 00:10:24.176402 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:25 crc kubenswrapper[5121]: I0126 00:10:25.176007 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:26 crc kubenswrapper[5121]: I0126 00:10:26.175233 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:26 crc kubenswrapper[5121]: I0126 00:10:26.414367 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:10:26 crc kubenswrapper[5121]: I0126 00:10:26.414690 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:26 crc kubenswrapper[5121]: I0126 00:10:26.415718 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:26 crc kubenswrapper[5121]: I0126 00:10:26.415771 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:26 crc kubenswrapper[5121]: I0126 00:10:26.415783 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:26 crc 
kubenswrapper[5121]: E0126 00:10:26.416128 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:27 crc kubenswrapper[5121]: I0126 00:10:27.176076 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:27 crc kubenswrapper[5121]: E0126 00:10:27.842827 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:28 crc kubenswrapper[5121]: E0126 00:10:28.073860 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 26 00:10:28 crc kubenswrapper[5121]: I0126 00:10:28.175280 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:28 crc kubenswrapper[5121]: I0126 00:10:28.417956 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:28 crc kubenswrapper[5121]: I0126 00:10:28.419552 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:28 crc kubenswrapper[5121]: I0126 00:10:28.420033 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:28 crc kubenswrapper[5121]: I0126 00:10:28.420208 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:28 crc kubenswrapper[5121]: I0126 00:10:28.420355 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:28 crc kubenswrapper[5121]: E0126 00:10:28.438551 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:29 crc kubenswrapper[5121]: I0126 00:10:29.938557 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:30 crc kubenswrapper[5121]: I0126 00:10:30.176248 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:30 crc kubenswrapper[5121]: E0126 00:10:30.358232 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:31 crc kubenswrapper[5121]: I0126 00:10:31.174847 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:32 crc kubenswrapper[5121]: I0126 00:10:32.175474 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:33 crc kubenswrapper[5121]: I0126 00:10:33.176269 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:34 crc kubenswrapper[5121]: I0126 00:10:34.175601 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:34 crc kubenswrapper[5121]: E0126 00:10:34.849383 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.176721 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.438904 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.440488 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.440517 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.440528 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.440548 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:35 crc kubenswrapper[5121]: E0126 00:10:35.451008 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.764558 5121 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jbdd2" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.771362 5121 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jbdd2" Jan 26 00:10:35 crc kubenswrapper[5121]: I0126 00:10:35.849018 5121 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 00:10:36 crc kubenswrapper[5121]: I0126 00:10:36.055611 5121 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 00:10:36 
crc kubenswrapper[5121]: I0126 00:10:36.773330 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-25 00:05:35 +0000 UTC" deadline="2026-02-18 15:06:22.017996171 +0000 UTC" Jan 26 00:10:36 crc kubenswrapper[5121]: I0126 00:10:36.773451 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="566h55m45.244563451s" Jan 26 00:10:37 crc kubenswrapper[5121]: I0126 00:10:37.255934 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:37 crc kubenswrapper[5121]: I0126 00:10:37.257626 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:37 crc kubenswrapper[5121]: I0126 00:10:37.257715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:37 crc kubenswrapper[5121]: I0126 00:10:37.257741 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:37 crc kubenswrapper[5121]: E0126 00:10:37.258576 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:37 crc kubenswrapper[5121]: I0126 00:10:37.259101 5121 scope.go:117] "RemoveContainer" containerID="968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0" Jan 26 00:10:39 crc kubenswrapper[5121]: I0126 00:10:39.039327 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:39 crc kubenswrapper[5121]: I0126 00:10:39.041986 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"} Jan 26 00:10:39 crc kubenswrapper[5121]: I0126 00:10:39.042288 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:39 crc kubenswrapper[5121]: I0126 00:10:39.042934 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:39 crc kubenswrapper[5121]: I0126 00:10:39.043190 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:39 crc kubenswrapper[5121]: I0126 00:10:39.043219 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:39 crc kubenswrapper[5121]: E0126 00:10:39.043660 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:40 crc kubenswrapper[5121]: E0126 00:10:40.359645 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.050417 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.051890 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.054412 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" exitCode=255 Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.054700 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"} Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.054937 5121 scope.go:117] "RemoveContainer" containerID="968a37ae2ec32a0680b91b4eb114669ebc855afda1f822b8752b2209f22da8f0" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.055291 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.056753 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.056828 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.056847 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:41 crc kubenswrapper[5121]: E0126 00:10:41.057609 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.057978 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:10:41 crc kubenswrapper[5121]: E0126 00:10:41.058275 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:41 crc kubenswrapper[5121]: I0126 00:10:41.392286 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.059882 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.062557 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.063624 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.063667 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.063680 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc 
kubenswrapper[5121]: E0126 00:10:42.064280 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.064511 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.064724 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.452094 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.454485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.454570 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.454597 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.454889 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.470375 5121 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.470953 5121 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.470996 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.475669 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.475711 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.475722 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.475738 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.475754 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.493999 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9e67991f-e7b2-4959-86b5-516338602be4\\\",\\\"systemUUID\\\":\\\"30670804-6c22-4489-85ce-db46ce0b0480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.501830 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.501938 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.501961 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.501985 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.502005 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.520242 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9e67991f-e7b2-4959-86b5-516338602be4\\\",\\\"systemUUID\\\":\\\"30670804-6c22-4489-85ce-db46ce0b0480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.529360 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.529422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.529440 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.529464 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.529483 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.545309 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9e67991f-e7b2-4959-86b5-516338602be4\\\",\\\"systemUUID\\\":\\\"30670804-6c22-4489-85ce-db46ce0b0480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.557158 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.557189 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.557198 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.557233 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.557247 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:42Z","lastTransitionTime":"2026-01-26T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.572055 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9e67991f-e7b2-4959-86b5-516338602be4\\\",\\\"systemUUID\\\":\\\"30670804-6c22-4489-85ce-db46ce0b0480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.572247 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.572274 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.672385 5121 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:42 crc kubenswrapper[5121]: I0126 00:10:42.732465 5121 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.773471 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.874623 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:42 crc kubenswrapper[5121]: E0126 00:10:42.975396 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.076234 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.176604 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.277629 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.378495 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.478605 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.578849 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.679954 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.780944 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.881396 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:43 crc kubenswrapper[5121]: E0126 00:10:43.982104 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.083050 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.183598 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.284865 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.385488 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.485985 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.586687 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.687845 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.788856 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.889840 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:44 crc kubenswrapper[5121]: E0126 00:10:44.990785 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.091402 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.192795 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: I0126 00:10:45.255461 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:45 crc kubenswrapper[5121]: I0126 00:10:45.256705 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:45 crc kubenswrapper[5121]: I0126 00:10:45.256834 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:45 crc kubenswrapper[5121]: I0126 00:10:45.256876 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.257748 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.293454 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.394112 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.495236 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.595818 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.696799 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.797639 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.897873 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:45 crc kubenswrapper[5121]: E0126 00:10:45.998496 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.099001 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.200189 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.300999 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.402135 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.502581 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.603327 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.704245 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.805244 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: E0126 00:10:46.905665 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:46 crc kubenswrapper[5121]: I0126 00:10:46.909842 5121 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.006072 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.106987 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.207163 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.308006 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.408447 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.508620 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.608974 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.709504 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.809964 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:47 crc kubenswrapper[5121]: E0126 00:10:47.910441 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.011031 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.111667 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.212743 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.313622 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.414272 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.514747 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.615131 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.716102 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.816401 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:48 crc kubenswrapper[5121]: E0126 00:10:48.916977 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.017403 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:49 crc kubenswrapper[5121]: I0126 00:10:49.042814 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:10:49 crc kubenswrapper[5121]: I0126 00:10:49.043430 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 26 00:10:49 crc kubenswrapper[5121]: I0126 00:10:49.044942 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:10:49 crc kubenswrapper[5121]: I0126 00:10:49.045044 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:10:49 crc kubenswrapper[5121]: I0126 00:10:49.045075 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.045922 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 26 00:10:49 crc kubenswrapper[5121]: I0126 00:10:49.046383 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"
Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.046738 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.117952 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.218519 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.319431 5121 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.420232 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.520650 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.621323 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.721565 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.821809 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:49 crc kubenswrapper[5121]: E0126 00:10:49.922950 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.023827 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.124521 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.225177 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.326053 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.360509 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.426959 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.527673 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.628335 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.728857 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.829859 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:50 crc kubenswrapper[5121]: E0126 00:10:50.930110 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.030730 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.131223 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.232353 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc 
kubenswrapper[5121]: E0126 00:10:51.332837 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.433175 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.533935 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.634592 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.735134 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.835326 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:51 crc kubenswrapper[5121]: E0126 00:10:51.935426 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.035737 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.136531 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.237014 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.338519 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.439122 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.539700 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.640598 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.741823 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.803164 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.807839 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.807910 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.807929 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.807952 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.807969 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.825951 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9e67991f-e7b2-4959-86b5-516338602be4\\\",\\\"systemUUID\\\":\\\"30670804-6c22-4489-85ce-db46ce0b0480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.829369 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.829567 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.829722 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.829942 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:10:52 crc kubenswrapper[5121]: I0126 00:10:52.830100 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:52Z","lastTransitionTime":"2026-01-26T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... three further update attempts follow within the same second, each identical to the sequence above: the NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady events and the "Node became not ready" condition are recorded again at 00:10:52.848861-.848939 and 00:10:52.866251-.866339, and each attempt fails with kubelet_node_status.go:597 "Error updating node status, will retry" (E0126 00:10:52.843381, E0126 00:10:52.861417, E0126 00:10:52.879339), every time with a patch payload byte-identical to the 00:10:52.825951 entry above and the same error: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": dial tcp 127.0.0.1:9743: connect: connection refused. The journal excerpt ends mid-way through the final patch payload. ...]
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9e67991f-e7b2-4959-86b5-516338602be4\\\",\\\"systemUUID\\\":\\\"30670804-6c22-4489-85ce-db46ce0b0480\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.879590 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.879618 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:52 crc kubenswrapper[5121]: E0126 00:10:52.980377 5121 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.081283 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.182152 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.283503 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.384624 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.485478 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.586243 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.687799 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.788780 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.889235 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:53 crc kubenswrapper[5121]: E0126 00:10:53.990543 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.091067 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.191549 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.292225 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.392595 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.493451 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.593817 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.694242 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.795125 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.895973 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:54 crc kubenswrapper[5121]: E0126 00:10:54.996653 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 
00:10:55.096946 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.197338 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.298567 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.399616 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.499896 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.600354 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.700842 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.801348 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:55 crc kubenswrapper[5121]: E0126 00:10:55.902324 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.002482 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.103101 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.203599 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.303879 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.404804 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.505508 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.606483 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.707548 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.808189 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:56 crc kubenswrapper[5121]: E0126 00:10:56.908312 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.008419 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.109142 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc 
kubenswrapper[5121]: E0126 00:10:57.209565 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.309913 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.410804 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.511071 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.612205 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.712812 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.813074 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:57 crc kubenswrapper[5121]: E0126 00:10:57.913651 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:58 crc kubenswrapper[5121]: E0126 00:10:58.014307 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.056088 5121 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.091155 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.113009 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.116740 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.116812 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.116823 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.116840 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.116870 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.206221 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.218923 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.219145 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.219212 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.219303 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.219437 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.304610 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.322792 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.323216 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.323511 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.323708 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.323942 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.404793 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.427079 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.427126 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.427136 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.427151 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.427162 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.530000 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.530049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.530062 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.530079 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.530092 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.633579 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.633637 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.633649 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.633666 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.633677 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.736790 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.737166 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.737242 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.737344 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.737445 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.776156 5121 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.840331 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.840623 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.840698 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.840807 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.840880 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.943747 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.943814 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.943828 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.943846 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.943857 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:58Z","lastTransitionTime":"2026-01-26T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.951087 5121 apiserver.go:52] "Watching apiserver" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.958564 5121 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.959555 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-diagnostics/network-check-target-fhkjl","openshift-machine-config-operator/machine-config-daemon-9w6w9","openshift-multus/multus-additional-cni-plugins-jx85r","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/iptables-alerter-5jnd7","openshift-multus/multus-bhg6w","openshift-multus/network-metrics-daemon-2st6h","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm","openshift-ovn-kubernetes/ovnkube-node-7l6td","openshift-etcd/etcd-crc","openshift-image-registry/node-ca-mgw5p","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-dns/node-resolver-zvvlx","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.960735 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.961276 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:58 crc kubenswrapper[5121]: E0126 00:10:58.961350 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.961913 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:58 crc kubenswrapper[5121]: E0126 00:10:58.962141 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.964159 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.964673 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.964812 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.965029 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.965059 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:10:58 crc kubenswrapper[5121]: E0126 00:10:58.965227 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.966800 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.967268 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.967390 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.967555 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.967569 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.967894 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.968276 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 26 00:10:58 crc kubenswrapper[5121]: I0126 00:10:58.985977 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.002111 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011375 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-system-cni-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011443 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-cni-multus\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011482 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-etc-kubernetes\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011634 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011719 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011753 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011921 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.011965 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012001 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-cnibin\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012057 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012091 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012113 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012133 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-os-release\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012156 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-conf-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012180 
5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012200 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-cni-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012220 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012241 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-socket-dir-parent\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.012268 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-k8s-cni-cncf-io\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.012738 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.013005 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:59.512945512 +0000 UTC m=+90.672146637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.014899 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-netns\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-hostroot\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015054 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015079 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-cni-bin\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015099 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-kubelet\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015119 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-multus-certs\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015139 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t99f2\" (UniqueName: \"kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015198 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: 
I0126 00:10:59.015226 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015255 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.015285 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.015745 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.015962 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:59.51592351 +0000 UTC m=+90.675124625 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.017714 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.028075 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.028298 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.028367 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.028491 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:59.528469629 +0000 UTC m=+90.687670754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.033274 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.033612 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.033634 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.033646 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.033752 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:59.533732684 +0000 UTC m=+90.692933809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.045673 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.046199 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.046246 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.046261 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.046280 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.046294 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.060551 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.074139 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116088 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-system-cni-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116162 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-cni-multus\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116180 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-etc-kubernetes\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116247 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-cni-multus\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116279 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116375 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-system-cni-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116418 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-etc-kubernetes\") pod 
\"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116440 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-cnibin\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116507 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-cnibin\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116506 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116552 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116581 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-os-release\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116584 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116603 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-conf-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-conf-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116707 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-cni-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116744 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116794 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-socket-dir-parent\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116820 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-k8s-cni-cncf-io\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116841 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-netns\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116863 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-hostroot\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.116867 5121 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: object "openshift-multus"/"cni-copy-resources" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116885 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116912 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-cni-bin\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.116940 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy podName:21d6bae8-c026-4b2f-9127-ca53977e50d8 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:59.616919163 +0000 UTC m=+90.776120288 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy") pod "multus-bhg6w" (UID: "21d6bae8-c026-4b2f-9127-ca53977e50d8") : object "openshift-multus"/"cni-copy-resources" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116948 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-cni-bin\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116970 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-kubelet\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116990 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-k8s-cni-cncf-io\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.116996 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-multus-certs\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117024 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-multus-certs\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117030 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t99f2\" (UniqueName: \"kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117022 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-cni-dir\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117075 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-hostroot\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117080 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-run-netns\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " 
pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117114 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-host-var-lib-kubelet\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.117126 5121 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: object "openshift-multus"/"multus-daemon-config" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.117171 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config podName:21d6bae8-c026-4b2f-9127-ca53977e50d8 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:59.617160821 +0000 UTC m=+90.776361946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config") pod "multus-bhg6w" (UID: "21d6bae8-c026-4b2f-9127-ca53977e50d8") : object "openshift-multus"/"multus-daemon-config" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117251 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-os-release\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.117302 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-socket-dir-parent\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.132387 5121 projected.go:289] Couldn't get configMap openshift-multus/kube-root-ca.crt: object "openshift-multus"/"kube-root-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.132623 5121 projected.go:289] Couldn't get configMap openshift-multus/openshift-service-ca.crt: object "openshift-multus"/"openshift-service-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.132707 5121 projected.go:194] Error preparing data for projected volume kube-api-access-t99f2 for pod openshift-multus/multus-bhg6w: [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.132954 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2 podName:21d6bae8-c026-4b2f-9127-ca53977e50d8 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:59.632918785 +0000 UTC m=+90.792119910 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t99f2" (UniqueName: "kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2") pod "multus-bhg6w" (UID: "21d6bae8-c026-4b2f-9127-ca53977e50d8") : [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.149363 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.149621 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.149697 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.149800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.149882 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.253049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.253101 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.253114 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.253130 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.253140 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.355795 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.355856 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.355868 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.355884 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.355896 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.458008 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.458068 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.458081 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.458102 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.458113 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.519970 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.520065 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.520115 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.520161 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.520195 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:00.520174457 +0000 UTC m=+91.679375582 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.520212 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:00.520203528 +0000 UTC m=+91.679404653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.520399 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.548026 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.548127 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.560841 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.560907 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.560925 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.560947 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.560962 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.569707 5121 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.577091 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.577114 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.577123 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.577585 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.577706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.577961 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.593456 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 26 00:10:59 crc kubenswrapper[5121]: W0126 00:10:59.613274 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-a5011af30cb67880fe7317f3e745e408f5057a26b635be6de9cc8110fb2cd42f WatchSource:0}: Error finding container a5011af30cb67880fe7317f3e745e408f5057a26b635be6de9cc8110fb2cd42f: Status 404 returned error can't find the container with id a5011af30cb67880fe7317f3e745e408f5057a26b635be6de9cc8110fb2cd42f Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.621196 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.621355 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.621384 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.621395 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.621452 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:00.621430348 +0000 UTC m=+91.780631473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.621581 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.621724 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.621811 5121 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: object "openshift-multus"/"cni-copy-resources" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.622048 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.622110 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.622137 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.622078 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy podName:21d6bae8-c026-4b2f-9127-ca53977e50d8 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:00.622046047 +0000 UTC m=+91.781247172 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy") pod "multus-bhg6w" (UID: "21d6bae8-c026-4b2f-9127-ca53977e50d8") : object "openshift-multus"/"cni-copy-resources" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.622217 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.622268 5121 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: object "openshift-multus"/"multus-daemon-config" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.622281 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:00.622241412 +0000 UTC m=+91.781442557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.622382 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config podName:21d6bae8-c026-4b2f-9127-ca53977e50d8 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:00.622369206 +0000 UTC m=+91.781570341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config") pod "multus-bhg6w" (UID: "21d6bae8-c026-4b2f-9127-ca53977e50d8") : object "openshift-multus"/"multus-daemon-config" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.663114 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.663154 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.663162 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.663177 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.663187 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.723170 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t99f2\" (UniqueName: \"kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.723598 5121 projected.go:289] Couldn't get configMap openshift-multus/kube-root-ca.crt: object "openshift-multus"/"kube-root-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.723631 5121 projected.go:289] Couldn't get configMap openshift-multus/openshift-service-ca.crt: object "openshift-multus"/"openshift-service-ca.crt" not registered Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.723647 5121 projected.go:194] Error preparing data for projected volume kube-api-access-t99f2 for pod openshift-multus/multus-bhg6w: [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: E0126 00:10:59.723741 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2 podName:21d6bae8-c026-4b2f-9127-ca53977e50d8 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:00.72371018 +0000 UTC m=+91.882911305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t99f2" (UniqueName: "kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2") pod "multus-bhg6w" (UID: "21d6bae8-c026-4b2f-9127-ca53977e50d8") : [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.765170 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.765234 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.765250 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.765273 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.765289 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.847973 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.868337 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.868394 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.868406 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.868427 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.868446 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:10:59 crc kubenswrapper[5121]: W0126 00:10:59.923795 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-3392399f41c6b225ca88efa9239c14e91fd2e6a5d3ca9796837707ca388e3952 WatchSource:0}: Error finding container 3392399f41c6b225ca88efa9239c14e91fd2e6a5d3ca9796837707ca388e3952: Status 404 returned error can't find the container with id 3392399f41c6b225ca88efa9239c14e91fd2e6a5d3ca9796837707ca388e3952 Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.970715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.970772 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.970783 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.970796 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:10:59 crc kubenswrapper[5121]: I0126 00:10:59.970807 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:10:59Z","lastTransitionTime":"2026-01-26T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.073384 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.073473 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.073496 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.073522 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.073539 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.176906 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.177005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.177034 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.177066 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.177090 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.271488 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.279483 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.279540 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.279554 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.279573 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.279586 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.283681 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.293532 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.294751 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.296312 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.296525 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.296327 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.298677 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.299029 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.311222 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.326457 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.340315 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.358405 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.373223 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bhg6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d6bae8-c026-4b2f-9127-ca53977e50d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t99f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhg6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.381986 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.382026 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.382036 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.382049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.382060 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.389281 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.400731 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.415754 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.423737 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.438428 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.449771 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6wm\" (UniqueName: \"kubernetes.io/projected/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-kube-api-access-5r6wm\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.449824 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.484211 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.484253 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.484265 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.484281 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.484292 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.551011 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.551162 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.551238 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.551256 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.551307 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs podName:e2c23c20-cf98-42ae-b5fb-5bbde2b0740c nodeName:}" failed. No retries permitted until 2026-01-26 00:11:01.051286438 +0000 UTC m=+92.210487563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs") pod "network-metrics-daemon-2st6h" (UID: "e2c23c20-cf98-42ae-b5fb-5bbde2b0740c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.551335 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6wm\" (UniqueName: \"kubernetes.io/projected/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-kube-api-access-5r6wm\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.551359 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.551414 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.551396671 +0000 UTC m=+93.710597816 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.551561 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.551650 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.551626268 +0000 UTC m=+93.710827393 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.571609 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6wm\" (UniqueName: \"kubernetes.io/projected/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-kube-api-access-5r6wm\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.586590 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.586640 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.586650 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.586663 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.586672 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.651776 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.651854 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.652020 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652068 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652105 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652120 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652146 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652171 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652184 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.652220 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652245 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.65222615 +0000 UTC m=+93.811427275 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.652351 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.652322142 +0000 UTC m=+93.811523337 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.652864 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-cni-binary-copy\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.652870 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/21d6bae8-c026-4b2f-9127-ca53977e50d8-multus-daemon-config\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.689117 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.689157 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.689168 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.689183 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.689192 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.753289 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t99f2\" (UniqueName: \"kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.792297 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.792370 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.792381 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.792401 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.792414 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.847695 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t99f2\" (UniqueName: \"kubernetes.io/projected/21d6bae8-c026-4b2f-9127-ca53977e50d8-kube-api-access-t99f2\") pod \"multus-bhg6w\" (UID: \"21d6bae8-c026-4b2f-9127-ca53977e50d8\") " pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.894407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.894467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.894484 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.894506 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.894525 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:00Z","lastTransitionTime":"2026-01-26T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.909013 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bhg6w" Jan 26 00:11:00 crc kubenswrapper[5121]: W0126 00:11:00.923106 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d6bae8_c026_4b2f_9127_ca53977e50d8.slice/crio-777eac6c6bfa3230805e0acdb049b8ae698a67571649db1d1016edc7b6878f73 WatchSource:0}: Error finding container 777eac6c6bfa3230805e0acdb049b8ae698a67571649db1d1016edc7b6878f73: Status 404 returned error can't find the container with id 777eac6c6bfa3230805e0acdb049b8ae698a67571649db1d1016edc7b6878f73 Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.925014 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:00 crc kubenswrapper[5121]: E0126 00:11:00.925240 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.938410 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[5121]: I0126 00:11:00.952866 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.000153 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.000208 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.000221 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.000241 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.000254 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.000842 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.011314 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.020501 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bhg6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d6bae8-c026-4b2f-9127-ca53977e50d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t99f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhg6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.028410 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2st6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r6wm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r6wm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2st6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.037395 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.047275 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.055552 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/62eaac02-ed09-4860-b496-07239e103d8d-proxy-tls\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.055597 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8kn\" (UniqueName: \"kubernetes.io/projected/62eaac02-ed09-4860-b496-07239e103d8d-kube-api-access-cs8kn\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.055648 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.055743 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/62eaac02-ed09-4860-b496-07239e103d8d-rootfs\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.055768 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.055778 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62eaac02-ed09-4860-b496-07239e103d8d-mcd-auth-proxy-config\") pod \"machine-config-daemon-9w6w9\" (UID: 
\"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.055817 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs podName:e2c23c20-cf98-42ae-b5fb-5bbde2b0740c nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.055800903 +0000 UTC m=+93.215002018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs") pod "network-metrics-daemon-2st6h" (UID: "e2c23c20-cf98-42ae-b5fb-5bbde2b0740c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.103055 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.103123 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.103136 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.103157 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.103173 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.156996 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/62eaac02-ed09-4860-b496-07239e103d8d-rootfs\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.157085 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62eaac02-ed09-4860-b496-07239e103d8d-mcd-auth-proxy-config\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.157223 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/62eaac02-ed09-4860-b496-07239e103d8d-proxy-tls\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.157219 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/62eaac02-ed09-4860-b496-07239e103d8d-rootfs\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.157273 5121 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: object "openshift-machine-config-operator"/"kube-rbac-proxy" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.157265 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8kn\" (UniqueName: \"kubernetes.io/projected/62eaac02-ed09-4860-b496-07239e103d8d-kube-api-access-cs8kn\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.157439 5121 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: object "openshift-machine-config-operator"/"proxy-tls" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.157458 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62eaac02-ed09-4860-b496-07239e103d8d-mcd-auth-proxy-config podName:62eaac02-ed09-4860-b496-07239e103d8d nodeName:}" failed. No retries permitted until 2026-01-26 00:11:01.657435675 +0000 UTC m=+92.816636800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/62eaac02-ed09-4860-b496-07239e103d8d-mcd-auth-proxy-config") pod "machine-config-daemon-9w6w9" (UID: "62eaac02-ed09-4860-b496-07239e103d8d") : object "openshift-machine-config-operator"/"kube-rbac-proxy" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.157535 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62eaac02-ed09-4860-b496-07239e103d8d-proxy-tls podName:62eaac02-ed09-4860-b496-07239e103d8d nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:01.657505117 +0000 UTC m=+92.816706242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/62eaac02-ed09-4860-b496-07239e103d8d-proxy-tls") pod "machine-config-daemon-9w6w9" (UID: "62eaac02-ed09-4860-b496-07239e103d8d") : object "openshift-machine-config-operator"/"proxy-tls" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.174081 5121 projected.go:289] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.174127 5121 projected.go:289] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.174143 5121 projected.go:194] Error preparing data for projected volume kube-api-access-cs8kn for pod openshift-machine-config-operator/machine-config-daemon-9w6w9: [object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered, object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.174228 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62eaac02-ed09-4860-b496-07239e103d8d-kube-api-access-cs8kn podName:62eaac02-ed09-4860-b496-07239e103d8d nodeName:}" failed. No retries permitted until 2026-01-26 00:11:01.674199799 +0000 UTC m=+92.833400944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cs8kn" (UniqueName: "kubernetes.io/projected/62eaac02-ed09-4860-b496-07239e103d8d-kube-api-access-cs8kn") pod "machine-config-daemon-9w6w9" (UID: "62eaac02-ed09-4860-b496-07239e103d8d") : [object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered, object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.205796 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.205851 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.205865 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.205886 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.205900 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.308035 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.308083 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.308094 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.308111 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.308128 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.410520 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.411001 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.411329 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.412011 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.412220 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.487368 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.489744 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.490967 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.491585 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.491863 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.492026 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.516086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.516132 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.516150 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.516169 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.516181 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.524894 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.540096 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.550685 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.567032 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570079 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-ovn\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570132 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-netd\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570161 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-slash\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570190 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-log-socket\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570226 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jbmj\" (UniqueName: \"kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570272 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570311 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-netns\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570356 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-kubelet\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570384 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-var-lib-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570411 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570475 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-systemd-units\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570530 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570593 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570625 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570665 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-systemd\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 
26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570698 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-node-log\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570737 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-bin\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570843 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570904 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-etc-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.570938 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.578427 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.593149 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.607123 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bhg6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21d6bae8-c026-4b2f-9127-ca53977e50d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t99f2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhg6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.618359 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.618403 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.618413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.618431 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.618441 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.624812 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2st6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r6wm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r6wm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2st6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.658250 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62eaac02-ed09-4860-b496-07239e103d8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs8kn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs8kn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:11:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9w6w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674104 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-netns\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674193 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-kubelet\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674218 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-var-lib-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674240 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674282 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62eaac02-ed09-4860-b496-07239e103d8d-mcd-auth-proxy-config\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674314 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-systemd-units\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674322 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-kubelet\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674362 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674417 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-var-lib-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674434 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674522 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-systemd-units\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674538 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674566 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-netns\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674582 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/62eaac02-ed09-4860-b496-07239e103d8d-proxy-tls\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674473 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.674591 5121 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: object "openshift-ovn-kubernetes"/"env-overrides" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674618 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-ovn-kubernetes\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674650 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-systemd\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674617 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-systemd\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 
00:11:01.674678 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.174654823 +0000 UTC m=+93.333856028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"env-overrides" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.674696 5121 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.674779 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.174740976 +0000 UTC m=+93.333942111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674804 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-node-log\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674826 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-bin\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674847 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674871 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8kn\" (UniqueName: \"kubernetes.io/projected/62eaac02-ed09-4860-b496-07239e103d8d-kube-api-access-cs8kn\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674892 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-etc-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 
00:11:01.674913 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674955 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-ovn\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674975 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-netd\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.674996 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-slash\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675017 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-log-socket\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675050 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-etc-openvswitch\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675055 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-log-socket\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675106 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-node-log\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.675118 5121 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-bin\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.675158 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.175145938 +0000 UTC m=+93.334347153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675172 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675191 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-ovn\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675247 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-netd\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675314 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-slash\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675355 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4jbmj\" (UniqueName: \"kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.675392 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.675448 5121 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.675479 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" 
failed. No retries permitted until 2026-01-26 00:11:02.175470177 +0000 UTC m=+93.334671312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.701626 5121 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.701665 5121 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.701692 5121 projected.go:194] Error preparing data for projected volume kube-api-access-4jbmj for pod openshift-ovn-kubernetes/ovnkube-node-7l6td: [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5121]: E0126 00:11:01.701768 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:02.201736001 +0000 UTC m=+93.360937126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4jbmj" (UniqueName: "kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered] Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.720389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.720426 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.720435 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.720448 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.720457 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.756242 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62eaac02-ed09-4860-b496-07239e103d8d-mcd-auth-proxy-config\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.763981 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs8kn\" (UniqueName: \"kubernetes.io/projected/62eaac02-ed09-4860-b496-07239e103d8d-kube-api-access-cs8kn\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.765459 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/62eaac02-ed09-4860-b496-07239e103d8d-proxy-tls\") pod \"machine-config-daemon-9w6w9\" (UID: \"62eaac02-ed09-4860-b496-07239e103d8d\") " pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.800971 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.822575 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.822624 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.822640 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.822658 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.822674 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.927984 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.928024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.928037 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.928056 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:01 crc kubenswrapper[5121]: I0126 00:11:01.928067 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:01Z","lastTransitionTime":"2026-01-26T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:01 crc kubenswrapper[5121]: W0126 00:11:01.930296 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62eaac02_ed09_4860_b496_07239e103d8d.slice/crio-54c84b7b32ae1bac7ef5de68bec1ff349acfb1607f72f3520173b527a6341f63 WatchSource:0}: Error finding container 54c84b7b32ae1bac7ef5de68bec1ff349acfb1607f72f3520173b527a6341f63: Status 404 returned error can't find the container with id 54c84b7b32ae1bac7ef5de68bec1ff349acfb1607f72f3520173b527a6341f63 Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.030695 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.030741 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.030751 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.030791 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.030801 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.079641 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.080366 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.080646 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs podName:e2c23c20-cf98-42ae-b5fb-5bbde2b0740c nodeName:}" failed. No retries permitted until 2026-01-26 00:11:04.080605586 +0000 UTC m=+95.239806731 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs") pod "network-metrics-daemon-2st6h" (UID: "e2c23c20-cf98-42ae-b5fb-5bbde2b0740c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.132988 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.133416 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.133504 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.133596 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.133673 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.180356 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.180433 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.180489 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.180545 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.180623 5121 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.180653 5121 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.180694 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:03.180672493 +0000 UTC m=+94.339873618 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.180724 5121 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.180835 5121 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: object "openshift-ovn-kubernetes"/"env-overrides" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.180858 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:03.180812937 +0000 UTC m=+94.340014102 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"ovnkube-script-lib" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.181000 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:03.180970932 +0000 UTC m=+94.340172227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"ovn-node-metrics-cert" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.181024 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:03.181014163 +0000 UTC m=+94.340215498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : object "openshift-ovn-kubernetes"/"env-overrides" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.236152 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.236226 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.236246 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.236303 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.236321 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.281817 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4jbmj\" (UniqueName: \"kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.282203 5121 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.282232 5121 projected.go:289] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.282245 5121 projected.go:194] Error preparing data for projected volume kube-api-access-4jbmj for pod openshift-ovn-kubernetes/ovnkube-node-7l6td: [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered] Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.282336 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj podName:c13c9422-5f83-40d0-bb0f-3055101ae2ba nodeName:}" failed. No retries permitted until 2026-01-26 00:11:03.282311816 +0000 UTC m=+94.441512941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4jbmj" (UniqueName: "kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj") pod "ovnkube-node-7l6td" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba") : [object "openshift-ovn-kubernetes"/"kube-root-ca.crt" not registered, object "openshift-ovn-kubernetes"/"openshift-service-ca.crt" not registered] Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.358822 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.358868 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.358878 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.358893 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.358902 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.461987 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.462046 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.462058 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.462077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.462091 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.564468 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.564528 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.564542 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.564563 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.564575 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.585189 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.585393 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.585436 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.585560 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:06.585531014 +0000 UTC m=+97.744732159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.585614 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.585697 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:06.585672588 +0000 UTC m=+97.744873723 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.666813 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.666948 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.666967 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.666982 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.666992 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.686415 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686624 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686655 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686727 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686820 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686851 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.686660 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686876 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686853 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:06.686827336 +0000 UTC m=+97.846028471 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:02 crc kubenswrapper[5121]: E0126 00:11:02.686981 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:06.6869624 +0000 UTC m=+97.846163535 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.769021 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.769064 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.769080 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.769096 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.769106 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.872472 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.872543 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.872555 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.872575 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.872586 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.886622 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.890781 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.891194 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.891262 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.892035 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.892488 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.892731 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.893285 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.975908 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.975945 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.975956 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.975968 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.975977 5121 
setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:02Z","lastTransitionTime":"2026-01-26T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.990739 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfp76\" (UniqueName: \"kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.990845 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b37035eb-d0d7-460d-98de-b7bc2acd8c39-host\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:02 crc kubenswrapper[5121]: I0126 00:11:02.990906 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.080923 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.081340 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.081356 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.081371 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.081380 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.091368 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.091456 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp76\" (UniqueName: \"kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.091510 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b37035eb-d0d7-460d-98de-b7bc2acd8c39-host\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.091577 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b37035eb-d0d7-460d-98de-b7bc2acd8c39-host\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.091636 5121 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: object "openshift-image-registry"/"image-registry-certificates" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.091674 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca podName:b37035eb-d0d7-460d-98de-b7bc2acd8c39 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:03.591661136 +0000 UTC m=+94.750862261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca") pod "node-ca-mgw5p" (UID: "b37035eb-d0d7-460d-98de-b7bc2acd8c39") : object "openshift-image-registry"/"image-registry-certificates" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.109438 5121 projected.go:289] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.109481 5121 projected.go:289] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.109499 5121 projected.go:194] Error preparing data for projected volume kube-api-access-jfp76 for pod openshift-image-registry/node-ca-mgw5p: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.109577 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76 podName:b37035eb-d0d7-460d-98de-b7bc2acd8c39 nodeName:}" failed. 
No retries permitted until 2026-01-26 00:11:03.609551853 +0000 UTC m=+94.768752998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jfp76" (UniqueName: "kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76") pod "node-ca-mgw5p" (UID: "b37035eb-d0d7-460d-98de-b7bc2acd8c39") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.149836 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.149874 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.149891 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.149905 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.149914 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:11:03Z","lastTransitionTime":"2026-01-26T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.192452 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.192609 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.192658 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.192720 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.193343 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.193706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.193868 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.292908 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4jbmj\" (UniqueName: \"kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.296971 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jbmj\" (UniqueName: \"kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.376879 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.390074 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.595510 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.595977 5121 configmap.go:193] Couldn't get configMap openshift-image-registry/image-registry-certificates: object "openshift-image-registry"/"image-registry-certificates" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.596120 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca podName:b37035eb-d0d7-460d-98de-b7bc2acd8c39 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:04.596086449 +0000 UTC m=+95.755287614 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca") pod "node-ca-mgw5p" (UID: "b37035eb-d0d7-460d-98de-b7bc2acd8c39") : object "openshift-image-registry"/"image-registry-certificates" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.696053 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp76\" (UniqueName: \"kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.696402 5121 projected.go:289] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.696460 5121 projected.go:289] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.696485 5121 projected.go:194] Error preparing data for projected volume kube-api-access-jfp76 for pod openshift-image-registry/node-ca-mgw5p: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Jan 26 00:11:03 crc kubenswrapper[5121]: E0126 00:11:03.696662 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76 podName:b37035eb-d0d7-460d-98de-b7bc2acd8c39 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:04.696625269 +0000 UTC m=+95.855826394 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jfp76" (UniqueName: "kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76") pod "node-ca-mgw5p" (UID: "b37035eb-d0d7-460d-98de-b7bc2acd8c39") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.853296 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert\") pod \"ovnkube-node-7l6td\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.958670 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.962345 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.962444 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.962978 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 26 00:11:03 crc kubenswrapper[5121]: I0126 00:11:03.963488 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.101873 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125078 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-binary-copy\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125151 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cnibin\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125183 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125244 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125676 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125739 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-system-cni-dir\") pod \"multus-additional-cni-plugins-jx85r\" (UID: 
\"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.125819 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125844 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-os-release\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.125874 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs podName:e2c23c20-cf98-42ae-b5fb-5bbde2b0740c nodeName:}" failed. No retries permitted until 2026-01-26 00:11:08.125857288 +0000 UTC m=+99.285058413 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs") pod "network-metrics-daemon-2st6h" (UID: "e2c23c20-cf98-42ae-b5fb-5bbde2b0740c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125921 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdq8j\" (UniqueName: \"kubernetes.io/projected/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-kube-api-access-zdq8j\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.125987 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.226455 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-binary-copy\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.226515 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cnibin\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.226535 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc 
kubenswrapper[5121]: I0126 00:11:04.226554 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.226598 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-system-cni-dir\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.226615 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-os-release\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.226631 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdq8j\" (UniqueName: \"kubernetes.io/projected/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-kube-api-access-zdq8j\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.226686 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.226753 5121 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.226827 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist podName:43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc nodeName:}" failed. No retries permitted until 2026-01-26 00:11:04.72681127 +0000 UTC m=+95.886012395 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-jx85r" (UID: "43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc") : object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.227277 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-system-cni-dir\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.227341 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cnibin\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.227380 5121 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-flatfile-config: object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.227409 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap podName:43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc nodeName:}" failed. No retries permitted until 2026-01-26 00:11:04.727401738 +0000 UTC m=+95.886602863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whereabouts-flatfile-configmap" (UniqueName: "kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap") pod "multus-additional-cni-plugins-jx85r" (UID: "43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc") : object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.227468 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.227707 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-os-release\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.228524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-binary-copy\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.346844 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdq8j\" (UniqueName: \"kubernetes.io/projected/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-kube-api-access-zdq8j\") pod 
\"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.636563 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.638298 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b37035eb-d0d7-460d-98de-b7bc2acd8c39-serviceca\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.737711 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.737836 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp76\" (UniqueName: \"kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.737869 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.737967 5121 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-flatfile-config: object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.738027 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap podName:43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc nodeName:}" failed. No retries permitted until 2026-01-26 00:11:05.738012052 +0000 UTC m=+96.897213177 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "whereabouts-flatfile-configmap" (UniqueName: "kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap") pod "multus-additional-cni-plugins-jx85r" (UID: "43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc") : object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.738036 5121 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: E0126 00:11:04.738193 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist podName:43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc nodeName:}" failed. No retries permitted until 2026-01-26 00:11:05.738159237 +0000 UTC m=+96.897360362 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-jx85r" (UID: "43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc") : object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.777669 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfp76\" (UniqueName: \"kubernetes.io/projected/b37035eb-d0d7-460d-98de-b7bc2acd8c39-kube-api-access-jfp76\") pod \"node-ca-mgw5p\" (UID: \"b37035eb-d0d7-460d-98de-b7bc2acd8c39\") " pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:04 crc kubenswrapper[5121]: I0126 00:11:04.876136 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-mgw5p" Jan 26 00:11:04 crc kubenswrapper[5121]: W0126 00:11:04.890556 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb37035eb_d0d7_460d_98de_b7bc2acd8c39.slice/crio-479b90b27237b8fa0746013e6d4cc6c4048e6d0510a2a7bc714df5e28414346b WatchSource:0}: Error finding container 479b90b27237b8fa0746013e6d4cc6c4048e6d0510a2a7bc714df5e28414346b: Status 404 returned error can't find the container with id 479b90b27237b8fa0746013e6d4cc6c4048e6d0510a2a7bc714df5e28414346b Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.285002 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.289328 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.289393 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.291712 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.342750 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbf2j\" (UniqueName: \"kubernetes.io/projected/20cbcf10-39de-420a-ac45-e8228cf2fa65-kube-api-access-hbf2j\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.342870 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20cbcf10-39de-420a-ac45-e8228cf2fa65-tmp-dir\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.342963 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20cbcf10-39de-420a-ac45-e8228cf2fa65-hosts-file\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.409298 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"3392399f41c6b225ca88efa9239c14e91fd2e6a5d3ca9796837707ca388e3952"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.411568 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.411693 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.411619 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.411654 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.411585 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.412407 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:05 crc kubenswrapper[5121]: E0126 00:11:05.412529 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:05 crc kubenswrapper[5121]: E0126 00:11:05.412608 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:05 crc kubenswrapper[5121]: E0126 00:11:05.412665 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:05 crc kubenswrapper[5121]: E0126 00:11:05.413035 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.416964 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.417294 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.417456 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.417734 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.418020 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.478914 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbf2j\" (UniqueName: \"kubernetes.io/projected/20cbcf10-39de-420a-ac45-e8228cf2fa65-kube-api-access-hbf2j\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.479017 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20cbcf10-39de-420a-ac45-e8228cf2fa65-tmp-dir\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.479050 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.479083 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7twmw\" (UniqueName: \"kubernetes.io/projected/a042b0d8-0b7b-4790-a026-e24e2f1426ae-kube-api-access-7twmw\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.479111 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.479133 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-env-overrides\") pod 
\"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.479997 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20cbcf10-39de-420a-ac45-e8228cf2fa65-hosts-file\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.486826 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20cbcf10-39de-420a-ac45-e8228cf2fa65-tmp-dir\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.487440 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:11:05 crc kubenswrapper[5121]: E0126 00:11:05.488580 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.488939 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/20cbcf10-39de-420a-ac45-e8228cf2fa65-hosts-file\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.494644 5121 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500722 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"a5011af30cb67880fe7317f3e745e408f5057a26b635be6de9cc8110fb2cd42f"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500770 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"82a388015d8e6244801c06b323bdeaff4ab9a66449e6d622998b630dd55f7fdc"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500789 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f2c70c9fa81b525e278e34e37929842e1a594dec97b19f5db8e8728f936eecd5"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500804 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"ea8ae77b2458b83b63f9050f8b95bc16834ed5d6563826bfcfd5a5b24c31dbdc"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500814 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhg6w" 
event={"ID":"21d6bae8-c026-4b2f-9127-ca53977e50d8","Type":"ContainerStarted","Data":"777eac6c6bfa3230805e0acdb049b8ae698a67571649db1d1016edc7b6878f73"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500823 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"54c84b7b32ae1bac7ef5de68bec1ff349acfb1607f72f3520173b527a6341f63"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500839 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhg6w" event={"ID":"21d6bae8-c026-4b2f-9127-ca53977e50d8","Type":"ContainerStarted","Data":"8beee9011422d1e0a77b616cac7a91b910641fa0f718b4aef38907b73f462cf9"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500852 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"7b5541cbee6f4b93c96d7abb2d6b41119bc6cc2de7a940af0a248ff8cd825692"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500865 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"7f02aa8ab6740f6cdf5e0536f5661b5f7e67bd30343de05c40644acd4b1d091e"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500880 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"121715febe285b0cd53762d792b1e46046f0843af04ecfb809633b61a008898d"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500893 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mgw5p" event={"ID":"b37035eb-d0d7-460d-98de-b7bc2acd8c39","Type":"ContainerStarted","Data":"479b90b27237b8fa0746013e6d4cc6c4048e6d0510a2a7bc714df5e28414346b"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500905 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"4b3042b7a032b3a0e3b3faac4e6fbbef998f675f18719b7f1810c845ca04a994"} Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.500928 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz"] Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.505520 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbf2j\" (UniqueName: \"kubernetes.io/projected/20cbcf10-39de-420a-ac45-e8228cf2fa65-kube-api-access-hbf2j\") pod \"node-resolver-zvvlx\" (UID: \"20cbcf10-39de-420a-ac45-e8228cf2fa65\") " pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.569175 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.569152944 podStartE2EDuration="7.569152944s" podCreationTimestamp="2026-01-26 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:05.565486336 +0000 UTC m=+96.724687471" watchObservedRunningTime="2026-01-26 00:11:05.569152944 +0000 UTC m=+96.728354069" 
Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.572733 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.580231 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.580551 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.580696 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.580895 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581265 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581296 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581321 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581347 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581368 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581390 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581412 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 
00:11:05.581438 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581464 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581486 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581510 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581535 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581559 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581584 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581604 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581629 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581651 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: 
\"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581676 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581700 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581729 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581777 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581802 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581826 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581849 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581876 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581901 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581923 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: 
\"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581950 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581975 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.581998 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582025 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582047 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582072 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582093 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582113 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582134 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582158 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: 
\"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582263 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582289 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582312 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582332 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582354 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582379 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582408 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582430 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582451 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582473 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582498 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582520 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582542 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582607 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582632 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582655 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582677 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582702 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582724 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582746 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582787 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582811 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582836 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582861 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582890 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582914 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582939 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582960 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.582982 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583006 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583028 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583050 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583072 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583095 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583124 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583148 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583177 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583200 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583224 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583248 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583277 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583301 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583327 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583353 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583375 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583406 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583428 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583458 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583482 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583505 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583529 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583590 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583614 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583639 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583663 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583687 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583712 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583738 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583785 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583812 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583837 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583886 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.583922 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.584959 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.584994 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585015 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585038 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585077 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:11:05 crc kubenswrapper[5121]: 
I0126 00:11:05.585095 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585132 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585168 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585187 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585213 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585239 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585257 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585275 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585296 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585313 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: 
\"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585336 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585353 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585370 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585414 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585436 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585453 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585472 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585489 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585514 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585534 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585557 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585578 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585600 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585625 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585642 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585670 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585704 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585730 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585787 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585805 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585824 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585845 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585871 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585898 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585923 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585945 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585966 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.585988 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586009 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586027 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod 
\"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586045 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586062 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586079 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586099 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586116 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586134 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586172 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586195 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586218 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586236 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod 
\"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586255 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586274 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586313 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586334 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586355 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586374 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586403 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586427 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586456 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586475 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: 
\"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586494 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586518 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586536 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586555 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586577 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586603 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586621 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586641 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586660 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586681 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586701 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586720 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.586738 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587123 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587152 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587210 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587232 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587272 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587291 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587310 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587332 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587354 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587372 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587391 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587411 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587440 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587469 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587498 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587525 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587555 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587642 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587662 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587686 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587715 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587741 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587785 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587808 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587830 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587911 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587959 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.587981 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588002 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588031 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588061 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588091 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588111 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588130 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588155 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588184 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: 
\"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588208 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588228 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588248 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588272 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588581 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588616 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588638 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.588669 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.589184 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.589231 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7twmw\" (UniqueName: 
\"kubernetes.io/projected/a042b0d8-0b7b-4790-a026-e24e2f1426ae-kube-api-access-7twmw\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.589267 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.589295 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.590050 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.592164 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.592864 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.596698 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.597487 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.598882 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.599139 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.600490 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.601137 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.601619 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.602365 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.611619 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.612061 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.613061 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.627810 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.628350 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.628397 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.628522 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.628861 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.630163 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zvvlx" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.698239 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.698909 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.699136 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.700094 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.700632 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.701038 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.701369 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.701610 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.701631 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.701875 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.702688 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.702989 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.701056 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.704680 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.705244 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.705391 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.705873 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). 
InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.706083 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.706174 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.706401 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.707719 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.708941 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.710528 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.711273 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.714925 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.715545 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.716215 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.716740 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.717434 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.718133 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.718661 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.746908 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.747311 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.747742 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.748923 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.749078 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.749131 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.750083 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.750525 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.752242 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.752580 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.752822 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753196 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753349 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82dd5953-67da-467c-be7b-5338dd79b8f6-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753415 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/82dd5953-67da-467c-be7b-5338dd79b8f6-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753432 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/82dd5953-67da-467c-be7b-5338dd79b8f6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753487 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82dd5953-67da-467c-be7b-5338dd79b8f6-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753593 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82dd5953-67da-467c-be7b-5338dd79b8f6-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753639 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753891 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753952 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). 
InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753978 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.753982 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.754544 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.754630 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755048 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755065 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755078 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755088 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755100 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755110 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755121 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755132 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: 
\"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755144 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755154 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755164 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755175 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755187 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755198 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755209 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755221 5121 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755233 5121 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755247 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755257 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755269 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755281 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755292 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755304 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755314 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755324 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755337 5121 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755351 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755362 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755375 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755387 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755398 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755409 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755418 5121 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755429 5121 reconciler_common.go:299] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755441 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755458 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755478 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755489 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755500 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755511 5121 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755522 5121 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755536 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755546 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755557 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755567 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755578 5121 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755589 5121 reconciler_common.go:299] "Volume detached for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755599 5121 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755609 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755619 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755629 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755641 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755652 5121 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755662 5121 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755674 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755684 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755695 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.755680 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.756652 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.757140 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.757674 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.758160 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.758617 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.759308 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.765076 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.766145 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.766291 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.766672 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.766665 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.767013 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.768139 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.769042 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.769315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.769457 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). 
InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: E0126 00:11:05.769873 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:06.269842883 +0000 UTC m=+97.429044008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.770045 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.770198 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.770362 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.771220 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.771618 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.771951 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.772713 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20cbcf10_39de_420a_ac45_e8228cf2fa65.slice/crio-7bd34c6d84ee90ba468a2610c3704e3b6763fd6ee26686f189299906242d3617 WatchSource:0}: Error finding container 7bd34c6d84ee90ba468a2610c3704e3b6763fd6ee26686f189299906242d3617: Status 404 returned error can't find the container with id 7bd34c6d84ee90ba468a2610c3704e3b6763fd6ee26686f189299906242d3617 Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.773318 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.773634 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.774199 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.774235 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.774340 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.774710 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.776149 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.776523 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.776810 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.777064 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.777313 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.779097 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7twmw\" (UniqueName: \"kubernetes.io/projected/a042b0d8-0b7b-4790-a026-e24e2f1426ae-kube-api-access-7twmw\") pod \"ovnkube-control-plane-57b78d8988-2hvlm\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.820335 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.821559 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.822286 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.822555 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.826691 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.846145 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.846738 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.846861 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jx85r\" (UID: \"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc\") " pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.847234 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.847392 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.847520 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.848626 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.849079 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.867235 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.867669 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.868156 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.868508 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.868708 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.868921 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.869132 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.869550 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.869828 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.869999 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.870157 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.870556 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.871343 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.872727 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.873275 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.874518 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.876157 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.876881 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.877661 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=7.877633857 podStartE2EDuration="7.877633857s" podCreationTimestamp="2026-01-26 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:05.772031798 +0000 UTC m=+96.931232923" watchObservedRunningTime="2026-01-26 00:11:05.877633857 +0000 UTC m=+97.036834992" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.877820 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82dd5953-67da-467c-be7b-5338dd79b8f6-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.877867 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/82dd5953-67da-467c-be7b-5338dd79b8f6-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.877888 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/82dd5953-67da-467c-be7b-5338dd79b8f6-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.877923 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82dd5953-67da-467c-be7b-5338dd79b8f6-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.877982 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82dd5953-67da-467c-be7b-5338dd79b8f6-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878070 5121 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878082 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878094 5121 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878107 5121 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878117 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878128 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878138 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878150 5121 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878160 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878171 5121 reconciler_common.go:299] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878182 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878194 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878205 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878216 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878228 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878241 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878255 5121 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878266 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878276 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878285 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878295 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878304 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878315 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: 
\"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878324 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878333 5121 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878342 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878353 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878363 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878373 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878383 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878393 5121 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878402 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878412 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878421 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878431 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878443 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878452 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878549 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878561 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878571 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878581 5121 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878591 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878601 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878611 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878623 5121 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878633 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878643 5121 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878655 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878664 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 
00:11:05.878674 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878686 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878697 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878707 5121 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878717 5121 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878727 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878737 5121 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878746 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878774 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878784 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878793 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878805 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878813 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878823 5121 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878835 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878846 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878856 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878629 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.878923 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.880021 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/82dd5953-67da-467c-be7b-5338dd79b8f6-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.880051 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/82dd5953-67da-467c-be7b-5338dd79b8f6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.880976 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.881228 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). 
InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.881355 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.881375 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.881719 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.881735 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.882318 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.882496 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.919786 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82dd5953-67da-467c-be7b-5338dd79b8f6-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.920671 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/82dd5953-67da-467c-be7b-5338dd79b8f6-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.921207 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.923607 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.924296 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.924781 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.925445 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.927662 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jx85r" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.938501 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=7.938477429 podStartE2EDuration="7.938477429s" podCreationTimestamp="2026-01-26 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:05.923855448 +0000 UTC m=+97.083056583" watchObservedRunningTime="2026-01-26 00:11:05.938477429 +0000 UTC m=+97.097678544" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.938618 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.938615253 podStartE2EDuration="7.938615253s" podCreationTimestamp="2026-01-26 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:05.938442508 +0000 UTC m=+97.097643633" watchObservedRunningTime="2026-01-26 00:11:05.938615253 +0000 UTC m=+97.097816378" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.939270 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82dd5953-67da-467c-be7b-5338dd79b8f6-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-5xrbz\" (UID: \"82dd5953-67da-467c-be7b-5338dd79b8f6\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.941861 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43d2b4e3_b2cd_4a61_9204_8630bfd2fcfc.slice/crio-947a3194db9362706682e0a9223b1135c43b02a36a9a837ea57bd6ee6c9eae97 WatchSource:0}: Error finding container 947a3194db9362706682e0a9223b1135c43b02a36a9a837ea57bd6ee6c9eae97: Status 404 returned error can't find the container with id 947a3194db9362706682e0a9223b1135c43b02a36a9a837ea57bd6ee6c9eae97 Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.959592 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.959582 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.960146 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.960460 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.960459 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.960622 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.960663 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.960742 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.960876 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.961012 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.961074 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.961965 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.962418 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.962459 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.962562 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.962744 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.962848 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.962905 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.963131 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). 
InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.963909 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.963923 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.964072 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.964633 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.964824 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.965977 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.966450 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.967077 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.967359 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.968132 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.970246 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.970315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.970662 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.971268 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.971560 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.972106 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.972426 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.972420 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.973409 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.973497 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.973498 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.973655 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.974303 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.974469 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.975027 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.976252 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.975040 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.975146 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.976361 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.976385 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.976508 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.976491 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.977621 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.979155 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.979319 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.979634 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bhg6w" podStartSLOduration=69.979585969 podStartE2EDuration="1m9.979585969s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:05.974565831 +0000 UTC m=+97.133766956" watchObservedRunningTime="2026-01-26 00:11:05.979585969 +0000 UTC m=+97.138787114" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.979899 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.980369 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.980526 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.981240 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.981190 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.981880 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.982050 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.982468 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.983213 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.983530 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.983715 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.983883 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.984135 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.984147 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.984557 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.986278 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.986877 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.987322 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.987474 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.987416 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.987792 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988368 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988426 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.988435 5121 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes/kubernetes.io~empty-dir/ca-trust-extracted-pem Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988455 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988523 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988575 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.988630 5121 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes/kubernetes.io~projected/bound-sa-token Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988661 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988714 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988663 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.988741 5121 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes/kubernetes.io~projected/kube-api-access-zg8nc Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988890 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.988839 5121 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes/kubernetes.io~configmap/etcd-serving-ca Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988929 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.988845 5121 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes/kubernetes.io~secret/serving-cert Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.988961 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.988980 5121 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes/kubernetes.io~configmap/mcd-auth-proxy-config Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989004 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989015 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 26 00:11:05 crc kubenswrapper[5121]: W0126 00:11:05.989134 5121 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes/kubernetes.io~projected/kube-api-access-hckvg Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989160 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989329 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989347 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989359 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989369 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989379 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989389 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989400 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989410 5121 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989420 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 
crc kubenswrapper[5121]: I0126 00:11:05.989431 5121 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989445 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989456 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989467 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989478 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989488 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989498 5121 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989517 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989530 5121 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989541 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989551 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989562 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989572 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989588 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989604 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989614 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989628 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989645 5121 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989665 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989682 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989693 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989706 5121 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989621 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989719 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:05 crc kubenswrapper[5121]: I0126 00:11:05.989833 5121 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.989861 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.989878 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.989895 5121 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.989911 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.989928 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.989945 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.989970 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990002 5121 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990035 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990060 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990077 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990109 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990131 5121 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990172 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990205 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990241 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990279 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990303 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990316 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990339 5121 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990354 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990376 5121 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990402 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990425 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990441 5121 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990464 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990491 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990506 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990518 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.990544 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992253 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992279 5121 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992297 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992312 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992327 5121 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992341 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992357 5121 
reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992372 5121 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992388 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992401 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992414 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992439 5121 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992464 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992484 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992511 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992528 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992542 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992562 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992579 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992592 5121 
reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992606 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992618 5121 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992631 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992645 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992665 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992685 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.992698 5121 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.994682 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:05.996792 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.001162 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.005226 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.054895 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:11:06 crc kubenswrapper[5121]: W0126 00:11:06.083014 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda042b0d8_0b7b_4790_a026_e24e2f1426ae.slice/crio-94f68dad17f43462f570bc5800ba5645d24462429290989dc9f8f7565e8152d6 WatchSource:0}: Error finding container 94f68dad17f43462f570bc5800ba5645d24462429290989dc9f8f7565e8152d6: Status 404 returned error can't find the container with id 94f68dad17f43462f570bc5800ba5645d24462429290989dc9f8f7565e8152d6 Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.093958 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.093985 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.093996 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.094007 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.094019 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.125964 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.192078 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.194425 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.195832 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.195855 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.223095 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" event={"ID":"82dd5953-67da-467c-be7b-5338dd79b8f6","Type":"ContainerStarted","Data":"b9ddd848e656a5b3087f32ba2e52e62fe91add5a2e0d860c9ba1c82cb47fb46d"} Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.224371 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" event={"ID":"a042b0d8-0b7b-4790-a026-e24e2f1426ae","Type":"ContainerStarted","Data":"94f68dad17f43462f570bc5800ba5645d24462429290989dc9f8f7565e8152d6"} Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.225356 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerStarted","Data":"947a3194db9362706682e0a9223b1135c43b02a36a9a837ea57bd6ee6c9eae97"} Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.229923 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zvvlx" event={"ID":"20cbcf10-39de-420a-ac45-e8228cf2fa65","Type":"ContainerStarted","Data":"7bd34c6d84ee90ba468a2610c3704e3b6763fd6ee26686f189299906242d3617"} Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.323214 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.323422 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:07.323390912 +0000 UTC m=+98.482592037 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.333722 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.334862 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.421156 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.422888 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.425283 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.447812 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.451062 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.452502 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.453897 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.454832 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.456491 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.458039 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.458670 5121 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.460586 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.461652 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.462556 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.463828 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.465014 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.466058 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.467821 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.469336 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.513353 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.514178 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.515871 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.517036 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.518607 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.520084 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.521390 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.523771 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.524957 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.527004 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.528509 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.530267 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.532803 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.533858 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.536529 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.551485 5121 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.551674 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.555853 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.557094 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.560852 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.562813 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.563552 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.568728 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.570309 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.571265 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.573187 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.578503 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.627479 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.627603 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.628511 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.628579 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:14.628563088 +0000 UTC m=+105.787764213 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.629748 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.629820 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:14.629805984 +0000 UTC m=+105.789007109 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.659662 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.708557 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.717044 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.719691 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.721991 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.723967 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.725555 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.726941 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.727866 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" 
path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.728716 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.728787 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.728918 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.728940 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.728951 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.728959 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.728997 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:14.728980804 +0000 UTC m=+105.888181929 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.728999 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.729015 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:06 crc kubenswrapper[5121]: E0126 00:11:06.729076 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:14.729060497 +0000 UTC m=+105.888261622 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:06 crc kubenswrapper[5121]: I0126 00:11:06.729273 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.234502 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"f0c5ff0e2e0c32b79fbfa612b15af02232a52c728c6559cc1084a86a679a2573"} Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.237939 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da" exitCode=0 Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.238033 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.241695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mgw5p" event={"ID":"b37035eb-d0d7-460d-98de-b7bc2acd8c39","Type":"ContainerStarted","Data":"edc013acc9f6a6bc350c9634b3561fce846ceb223b5e1e0af3989e45e8a712f8"} Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.254841 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:07 crc kubenswrapper[5121]: E0126 00:11:07.254967 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.255364 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:07 crc kubenswrapper[5121]: E0126 00:11:07.255427 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.255476 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:07 crc kubenswrapper[5121]: E0126 00:11:07.255527 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.255583 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:07 crc kubenswrapper[5121]: E0126 00:11:07.255637 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.386855 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:07 crc kubenswrapper[5121]: E0126 00:11:07.387166 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:09.387142273 +0000 UTC m=+100.546343398 (durationBeforeRetry 2s). 
Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.386855 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:07 crc kubenswrapper[5121]: E0126 00:11:07.387166 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:09.387142273 +0000 UTC m=+100.546343398 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.499656 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podStartSLOduration=71.499618705 podStartE2EDuration="1m11.499618705s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:07.423780842 +0000 UTC m=+98.582981967" watchObservedRunningTime="2026-01-26 00:11:07.499618705 +0000 UTC m=+98.658819830"
Jan 26 00:11:07 crc kubenswrapper[5121]: I0126 00:11:07.515204 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-mgw5p" podStartSLOduration=71.515183623 podStartE2EDuration="1m11.515183623s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:07.513131453 +0000 UTC m=+98.672332598" watchObservedRunningTime="2026-01-26 00:11:07.515183623 +0000 UTC m=+98.674384748"
Jan 26 00:11:08 crc kubenswrapper[5121]: I0126 00:11:08.187201 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h"
Jan 26 00:11:08 crc kubenswrapper[5121]: E0126 00:11:08.187742 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:11:08 crc kubenswrapper[5121]: E0126 00:11:08.187811 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs podName:e2c23c20-cf98-42ae-b5fb-5bbde2b0740c nodeName:}" failed. No retries permitted until 2026-01-26 00:11:16.187795148 +0000 UTC m=+107.346996273 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs") pod "network-metrics-daemon-2st6h" (UID: "e2c23c20-cf98-42ae-b5fb-5bbde2b0740c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.308365 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.308369 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.308474 5121 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.308588 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:09 crc kubenswrapper[5121]: E0126 00:11:09.308883 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:09 crc kubenswrapper[5121]: E0126 00:11:09.309004 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:09 crc kubenswrapper[5121]: E0126 00:11:09.309107 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:09 crc kubenswrapper[5121]: E0126 00:11:09.309100 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.315623 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" event={"ID":"82dd5953-67da-467c-be7b-5338dd79b8f6","Type":"ContainerStarted","Data":"62962637c751cc1ba0679ec034f34e043210d7c59628ee511b58fb24c54730f0"} Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.317301 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" event={"ID":"a042b0d8-0b7b-4790-a026-e24e2f1426ae","Type":"ContainerStarted","Data":"c2f6c1d726e6ebd73f2b63b399de8f4f6ec7ef40be7ae7ffde7cd8dca5f021d7"} Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.318610 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerStarted","Data":"ad6e004cfdae5ef031f8f38c1c9de86593d7c811ff4567912f6d88f5e01e3751"} Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.321299 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.323387 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zvvlx" event={"ID":"20cbcf10-39de-420a-ac45-e8228cf2fa65","Type":"ContainerStarted","Data":"457f13b5d6e05b851f27788f8e98ec46b35970530a29b8b8cd2fb25d8558c2e2"} Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.460614 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:09 crc kubenswrapper[5121]: E0126 00:11:09.460705 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:13.460685477 +0000 UTC m=+104.619886792 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:09 crc kubenswrapper[5121]: I0126 00:11:09.461608 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-5xrbz" podStartSLOduration=73.461595884 podStartE2EDuration="1m13.461595884s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:09.379871627 +0000 UTC m=+100.539072762" watchObservedRunningTime="2026-01-26 00:11:09.461595884 +0000 UTC m=+100.620797029"
Jan 26 00:11:10 crc kubenswrapper[5121]: I0126 00:11:10.376265 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" event={"ID":"a042b0d8-0b7b-4790-a026-e24e2f1426ae","Type":"ContainerStarted","Data":"d252159539b6aa936348da8f7545cfcc9b6f0803a26ced328848eb5eb54e106b"}
Jan 26 00:11:10 crc kubenswrapper[5121]: I0126 00:11:10.383961 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"}
Jan 26 00:11:10 crc kubenswrapper[5121]: I0126 00:11:10.440924 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zvvlx" podStartSLOduration=74.440898848 podStartE2EDuration="1m14.440898848s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:09.476723469 +0000 UTC m=+100.635924614" watchObservedRunningTime="2026-01-26 00:11:10.440898848 +0000 UTC m=+101.600099973"
Jan 26 00:11:11 crc kubenswrapper[5121]: I0126 00:11:11.255468 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:11 crc kubenswrapper[5121]: E0126 00:11:11.256318 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
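[Editor's note] The pod_startup_latency_tracker records read oddly (71 to 86 seconds of "startup" for pods that started promptly) because podStartSLOduration is measured from podCreationTimestamp, and these pods were created while the node was still booting. A quick check of the arithmetic, assuming the duration is essentially observedRunningTime minus podCreationTimestamp:

```go
// Back-of-the-envelope check of the latency-tracker numbers above
// (assumed semantics, not the tracker's exact code).
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-26T00:09:56Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-26T00:11:09.379871627Z")
	// Prints 1m13.379871627s, in line with podStartSLOduration=73.46...
	fmt.Println(running.Sub(created))
}
```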
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:11 crc kubenswrapper[5121]: I0126 00:11:11.255532 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:11 crc kubenswrapper[5121]: E0126 00:11:11.256536 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:11 crc kubenswrapper[5121]: I0126 00:11:11.255474 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:11 crc kubenswrapper[5121]: E0126 00:11:11.256586 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:11 crc kubenswrapper[5121]: I0126 00:11:11.391355 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} Jan 26 00:11:12 crc kubenswrapper[5121]: I0126 00:11:12.396047 5121 generic.go:358] "Generic (PLEG): container finished" podID="43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc" containerID="ad6e004cfdae5ef031f8f38c1c9de86593d7c811ff4567912f6d88f5e01e3751" exitCode=0 Jan 26 00:11:12 crc kubenswrapper[5121]: I0126 00:11:12.396137 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerDied","Data":"ad6e004cfdae5ef031f8f38c1c9de86593d7c811ff4567912f6d88f5e01e3751"} Jan 26 00:11:12 crc kubenswrapper[5121]: I0126 00:11:12.417547 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} Jan 26 00:11:12 crc kubenswrapper[5121]: I0126 00:11:12.417594 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} Jan 26 00:11:12 crc kubenswrapper[5121]: I0126 00:11:12.434195 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" podStartSLOduration=75.434176978 podStartE2EDuration="1m15.434176978s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:10.442523456 +0000 UTC m=+101.601724601" watchObservedRunningTime="2026-01-26 
00:11:12.434176978 +0000 UTC m=+103.593378103" Jan 26 00:11:13 crc kubenswrapper[5121]: I0126 00:11:13.255072 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:13 crc kubenswrapper[5121]: E0126 00:11:13.255191 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:13 crc kubenswrapper[5121]: I0126 00:11:13.255497 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:13 crc kubenswrapper[5121]: E0126 00:11:13.255550 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:13 crc kubenswrapper[5121]: I0126 00:11:13.255588 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:13 crc kubenswrapper[5121]: E0126 00:11:13.255631 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:13 crc kubenswrapper[5121]: I0126 00:11:13.255667 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:13 crc kubenswrapper[5121]: E0126 00:11:13.255712 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:13 crc kubenswrapper[5121]: I0126 00:11:13.556601 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:13 crc kubenswrapper[5121]: E0126 00:11:13.556948 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:21.556916957 +0000 UTC m=+112.716118082 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:14 crc kubenswrapper[5121]: I0126 00:11:14.437495 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} Jan 26 00:11:14 crc kubenswrapper[5121]: I0126 00:11:14.669533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:14 crc kubenswrapper[5121]: I0126 00:11:14.669664 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.669716 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.669735 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.669802 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.669786325 +0000 UTC m=+121.828987450 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.669815 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.669809275 +0000 UTC m=+121.829010400 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: I0126 00:11:14.770332 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:14 crc kubenswrapper[5121]: I0126 00:11:14.770440 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770586 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770608 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770617 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770629 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770633 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770642 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770706 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.770686106 +0000 UTC m=+121.929887251 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:14 crc kubenswrapper[5121]: E0126 00:11:14.770727 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:11:30.770719027 +0000 UTC m=+121.929920162 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:15 crc kubenswrapper[5121]: I0126 00:11:15.255357 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:15 crc kubenswrapper[5121]: I0126 00:11:15.255423 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:15 crc kubenswrapper[5121]: E0126 00:11:15.255650 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:15 crc kubenswrapper[5121]: I0126 00:11:15.255755 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:15 crc kubenswrapper[5121]: E0126 00:11:15.256057 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:15 crc kubenswrapper[5121]: I0126 00:11:15.256167 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:15 crc kubenswrapper[5121]: E0126 00:11:15.256311 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:15 crc kubenswrapper[5121]: E0126 00:11:15.256492 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:16 crc kubenswrapper[5121]: I0126 00:11:16.217746 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:16 crc kubenswrapper[5121]: E0126 00:11:16.217908 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:16 crc kubenswrapper[5121]: E0126 00:11:16.217973 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs podName:e2c23c20-cf98-42ae-b5fb-5bbde2b0740c nodeName:}" failed. No retries permitted until 2026-01-26 00:11:32.217953678 +0000 UTC m=+123.377154803 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs") pod "network-metrics-daemon-2st6h" (UID: "e2c23c20-cf98-42ae-b5fb-5bbde2b0740c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:17 crc kubenswrapper[5121]: I0126 00:11:17.255963 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:17 crc kubenswrapper[5121]: I0126 00:11:17.256026 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:17 crc kubenswrapper[5121]: I0126 00:11:17.256207 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:17 crc kubenswrapper[5121]: E0126 00:11:17.256236 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:17 crc kubenswrapper[5121]: E0126 00:11:17.256331 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:17 crc kubenswrapper[5121]: E0126 00:11:17.256585 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:17 crc kubenswrapper[5121]: I0126 00:11:17.256638 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:17 crc kubenswrapper[5121]: E0126 00:11:17.257032 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:17 crc kubenswrapper[5121]: I0126 00:11:17.450801 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerStarted","Data":"06aa58731df41f0c0f64a228a777fe67df1c17f12d71781bf1fd57d31b91e3e8"} Jan 26 00:11:18 crc kubenswrapper[5121]: I0126 00:11:18.492046 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} Jan 26 00:11:19 crc kubenswrapper[5121]: I0126 00:11:19.255084 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:19 crc kubenswrapper[5121]: I0126 00:11:19.255218 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:19 crc kubenswrapper[5121]: I0126 00:11:19.255084 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:19 crc kubenswrapper[5121]: E0126 00:11:19.255381 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:19 crc kubenswrapper[5121]: E0126 00:11:19.255865 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:19 crc kubenswrapper[5121]: E0126 00:11:19.256015 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:19 crc kubenswrapper[5121]: I0126 00:11:19.256085 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:19 crc kubenswrapper[5121]: I0126 00:11:19.256119 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:11:19 crc kubenswrapper[5121]: E0126 00:11:19.256239 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:19 crc kubenswrapper[5121]: E0126 00:11:19.256326 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 26 00:11:21 crc kubenswrapper[5121]: I0126 00:11:21.254995 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:21 crc kubenswrapper[5121]: I0126 00:11:21.255038 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:21 crc kubenswrapper[5121]: E0126 00:11:21.255148 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:21 crc kubenswrapper[5121]: I0126 00:11:21.255288 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:21 crc kubenswrapper[5121]: E0126 00:11:21.255379 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:21 crc kubenswrapper[5121]: I0126 00:11:21.255511 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:21 crc kubenswrapper[5121]: E0126 00:11:21.256019 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:21 crc kubenswrapper[5121]: E0126 00:11:21.256090 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:21 crc kubenswrapper[5121]: I0126 00:11:21.508575 5121 generic.go:358] "Generic (PLEG): container finished" podID="43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc" containerID="06aa58731df41f0c0f64a228a777fe67df1c17f12d71781bf1fd57d31b91e3e8" exitCode=0 Jan 26 00:11:21 crc kubenswrapper[5121]: I0126 00:11:21.508673 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerDied","Data":"06aa58731df41f0c0f64a228a777fe67df1c17f12d71781bf1fd57d31b91e3e8"} Jan 26 00:11:21 crc kubenswrapper[5121]: I0126 00:11:21.587231 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:21 crc kubenswrapper[5121]: E0126 00:11:21.587391 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.587354446 +0000 UTC m=+128.746555571 (durationBeforeRetry 16s). 
Jan 26 00:11:21 crc kubenswrapper[5121]: E0126 00:11:21.587391 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:37.587354446 +0000 UTC m=+128.746555571 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:22 crc kubenswrapper[5121]: I0126 00:11:22.523589 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerStarted","Data":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"}
Jan 26 00:11:22 crc kubenswrapper[5121]: I0126 00:11:22.524210 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td"
Jan 26 00:11:22 crc kubenswrapper[5121]: I0126 00:11:22.524232 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td"
Jan 26 00:11:22 crc kubenswrapper[5121]: I0126 00:11:22.524243 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td"
Jan 26 00:11:22 crc kubenswrapper[5121]: I0126 00:11:22.555976 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td"
Jan 26 00:11:22 crc kubenswrapper[5121]: I0126 00:11:22.564407 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td"
Jan 26 00:11:22 crc kubenswrapper[5121]: I0126 00:11:22.627155 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podStartSLOduration=86.627130772 podStartE2EDuration="1m26.627130772s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:22.587692511 +0000 UTC m=+113.746893636" watchObservedRunningTime="2026-01-26 00:11:22.627130772 +0000 UTC m=+113.786331897"
Jan 26 00:11:23 crc kubenswrapper[5121]: I0126 00:11:23.255605 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h"
Jan 26 00:11:23 crc kubenswrapper[5121]: E0126 00:11:23.255734 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c"
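[Editor's note] The recurring TearDown failure above is a registration race, not a data problem: CSI volume operations first look the driver up by name in the kubelet's list of node-registered plugins, and kubevirt.io.hostpath-provisioner has not re-registered since the restart, so the lookup fails and the operation is requeued with backoff. A simplified illustration of that lookup (not the kubelet's actual CSI client code):

```go
// Sketch of a by-name CSI driver lookup preceding mount/unmount work.
package main

import (
	"fmt"
	"sync"
)

type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> plugin socket path
}

func (r *csiRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	sock, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return sock, nil
}

func main() {
	reg := &csiRegistry{drivers: map[string]string{}} // driver not yet re-registered
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("Unmounter.TearDownAt failed to get CSI client:", err)
	}
}
```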
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:23 crc kubenswrapper[5121]: I0126 00:11:23.256152 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:23 crc kubenswrapper[5121]: E0126 00:11:23.256194 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:23 crc kubenswrapper[5121]: I0126 00:11:23.256231 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:23 crc kubenswrapper[5121]: E0126 00:11:23.256271 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:23 crc kubenswrapper[5121]: I0126 00:11:23.753366 5121 generic.go:358] "Generic (PLEG): container finished" podID="43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc" containerID="63b69bf198d6cedd8a18211b04fb9bd5c39083e8f3c90d559a09cb27601e9f3a" exitCode=0 Jan 26 00:11:23 crc kubenswrapper[5121]: I0126 00:11:23.753463 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerDied","Data":"63b69bf198d6cedd8a18211b04fb9bd5c39083e8f3c90d559a09cb27601e9f3a"} Jan 26 00:11:24 crc kubenswrapper[5121]: I0126 00:11:24.760196 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerStarted","Data":"440088ed677dc8748d54b746c70a7cd0208fc9265b1f0434513712c59d7e5de7"} Jan 26 00:11:25 crc kubenswrapper[5121]: I0126 00:11:25.255725 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:25 crc kubenswrapper[5121]: E0126 00:11:25.255862 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:25 crc kubenswrapper[5121]: I0126 00:11:25.255866 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:25 crc kubenswrapper[5121]: I0126 00:11:25.255938 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:25 crc kubenswrapper[5121]: E0126 00:11:25.256048 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:25 crc kubenswrapper[5121]: E0126 00:11:25.256133 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:25 crc kubenswrapper[5121]: I0126 00:11:25.256168 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:25 crc kubenswrapper[5121]: E0126 00:11:25.256237 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:25 crc kubenswrapper[5121]: I0126 00:11:25.792070 5121 generic.go:358] "Generic (PLEG): container finished" podID="43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc" containerID="440088ed677dc8748d54b746c70a7cd0208fc9265b1f0434513712c59d7e5de7" exitCode=0 Jan 26 00:11:25 crc kubenswrapper[5121]: I0126 00:11:25.792158 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerDied","Data":"440088ed677dc8748d54b746c70a7cd0208fc9265b1f0434513712c59d7e5de7"} Jan 26 00:11:27 crc kubenswrapper[5121]: I0126 00:11:27.255850 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:27 crc kubenswrapper[5121]: E0126 00:11:27.256247 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:27 crc kubenswrapper[5121]: I0126 00:11:27.255893 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:27 crc kubenswrapper[5121]: E0126 00:11:27.256327 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:27 crc kubenswrapper[5121]: I0126 00:11:27.255927 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:27 crc kubenswrapper[5121]: I0126 00:11:27.255874 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:27 crc kubenswrapper[5121]: E0126 00:11:27.256422 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:27 crc kubenswrapper[5121]: E0126 00:11:27.256509 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:28 crc kubenswrapper[5121]: I0126 00:11:28.847357 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerStarted","Data":"6b2c25ce281671a50fba2e04bbeef548b241645e46b429a9ecea2fbb9e5bd9cd"} Jan 26 00:11:29 crc kubenswrapper[5121]: I0126 00:11:29.297947 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:29 crc kubenswrapper[5121]: I0126 00:11:29.297994 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:29 crc kubenswrapper[5121]: E0126 00:11:29.298073 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:29 crc kubenswrapper[5121]: I0126 00:11:29.298078 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:29 crc kubenswrapper[5121]: E0126 00:11:29.298141 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:29 crc kubenswrapper[5121]: E0126 00:11:29.298230 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:29 crc kubenswrapper[5121]: I0126 00:11:29.298277 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:29 crc kubenswrapper[5121]: E0126 00:11:29.298371 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.227240 5121 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.258274 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.402873 5121 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.636075 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2st6h"] Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.636233 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.636349 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.737253 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.737333 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.737502 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.737524 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.737614 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.737584357 +0000 UTC m=+153.896785472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.737650 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.737638298 +0000 UTC m=+153.896839513 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.837864 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.838369 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.838569 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.838605 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.838624 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.838690 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.838669313 +0000 UTC m=+153.997870438 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.839460 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.839480 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.839491 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:30 crc kubenswrapper[5121]: E0126 00:11:30.839527 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.839516058 +0000 UTC m=+153.998717183 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.857785 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.859638 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257"} Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.860161 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:30 crc kubenswrapper[5121]: I0126 00:11:30.995845 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=32.995827591 podStartE2EDuration="32.995827591s" podCreationTimestamp="2026-01-26 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:30.995732498 +0000 UTC m=+122.154933643" watchObservedRunningTime="2026-01-26 00:11:30.995827591 +0000 UTC m=+122.155028716" Jan 26 00:11:31 crc kubenswrapper[5121]: I0126 00:11:31.413832 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:31 crc kubenswrapper[5121]: E0126 00:11:31.413992 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:31 crc kubenswrapper[5121]: I0126 00:11:31.413832 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:31 crc kubenswrapper[5121]: I0126 00:11:31.414235 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:31 crc kubenswrapper[5121]: E0126 00:11:31.414321 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:31 crc kubenswrapper[5121]: E0126 00:11:31.414545 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:32 crc kubenswrapper[5121]: I0126 00:11:32.230177 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:32 crc kubenswrapper[5121]: E0126 00:11:32.230345 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:32 crc kubenswrapper[5121]: E0126 00:11:32.230417 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs podName:e2c23c20-cf98-42ae-b5fb-5bbde2b0740c nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.230400252 +0000 UTC m=+155.389601367 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs") pod "network-metrics-daemon-2st6h" (UID: "e2c23c20-cf98-42ae-b5fb-5bbde2b0740c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:11:32 crc kubenswrapper[5121]: I0126 00:11:32.257901 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:32 crc kubenswrapper[5121]: E0126 00:11:32.258032 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:32 crc kubenswrapper[5121]: I0126 00:11:32.869148 5121 generic.go:358] "Generic (PLEG): container finished" podID="43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc" containerID="6b2c25ce281671a50fba2e04bbeef548b241645e46b429a9ecea2fbb9e5bd9cd" exitCode=0 Jan 26 00:11:32 crc kubenswrapper[5121]: I0126 00:11:32.869234 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerDied","Data":"6b2c25ce281671a50fba2e04bbeef548b241645e46b429a9ecea2fbb9e5bd9cd"} Jan 26 00:11:33 crc kubenswrapper[5121]: I0126 00:11:33.255115 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:33 crc kubenswrapper[5121]: I0126 00:11:33.255173 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:33 crc kubenswrapper[5121]: I0126 00:11:33.255131 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:33 crc kubenswrapper[5121]: E0126 00:11:33.255265 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:33 crc kubenswrapper[5121]: E0126 00:11:33.255319 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:33 crc kubenswrapper[5121]: E0126 00:11:33.255392 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:34 crc kubenswrapper[5121]: I0126 00:11:34.256204 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:11:34 crc kubenswrapper[5121]: E0126 00:11:34.256870 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c" Jan 26 00:11:34 crc kubenswrapper[5121]: I0126 00:11:34.885914 5121 generic.go:358] "Generic (PLEG): container finished" podID="43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc" containerID="2f749e859517a85849c0db693e5a4de527bd8b9ff7f4eb736d2f574f155c18ab" exitCode=0 Jan 26 00:11:34 crc kubenswrapper[5121]: I0126 00:11:34.885996 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerDied","Data":"2f749e859517a85849c0db693e5a4de527bd8b9ff7f4eb736d2f574f155c18ab"} Jan 26 00:11:35 crc kubenswrapper[5121]: I0126 00:11:35.255355 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:11:35 crc kubenswrapper[5121]: E0126 00:11:35.255517 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 26 00:11:35 crc kubenswrapper[5121]: I0126 00:11:35.255604 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:11:35 crc kubenswrapper[5121]: I0126 00:11:35.255719 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:11:35 crc kubenswrapper[5121]: E0126 00:11:35.255905 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 26 00:11:35 crc kubenswrapper[5121]: E0126 00:11:35.256077 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 26 00:11:35 crc kubenswrapper[5121]: E0126 00:11:35.404538 5121 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 00:11:35 crc kubenswrapper[5121]: I0126 00:11:35.894739 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jx85r" event={"ID":"43d2b4e3-b2cd-4a61-9204-8630bfd2fcfc","Type":"ContainerStarted","Data":"025c4673ce8111c6ad23c9245ac5142159286ce1ec0a800d0f15928e3776f4e9"}
Jan 26 00:11:36 crc kubenswrapper[5121]: I0126 00:11:36.260118 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h"
Jan 26 00:11:36 crc kubenswrapper[5121]: E0126 00:11:36.260254 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c"
Jan 26 00:11:37 crc kubenswrapper[5121]: I0126 00:11:37.255070 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:37 crc kubenswrapper[5121]: I0126 00:11:37.255148 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:37 crc kubenswrapper[5121]: I0126 00:11:37.255108 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:37 crc kubenswrapper[5121]: E0126 00:11:37.255216 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:37 crc kubenswrapper[5121]: E0126 00:11:37.255296 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:11:37 crc kubenswrapper[5121]: E0126 00:11:37.255385 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:11:37 crc kubenswrapper[5121]: I0126 00:11:37.601230 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:37 crc kubenswrapper[5121]: E0126 00:11:37.601721 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.601686134 +0000 UTC m=+160.760887279 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:38 crc kubenswrapper[5121]: I0126 00:11:38.255079 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h"
Jan 26 00:11:38 crc kubenswrapper[5121]: E0126 00:11:38.255216 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c"
Jan 26 00:11:39 crc kubenswrapper[5121]: I0126 00:11:39.255417 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:39 crc kubenswrapper[5121]: I0126 00:11:39.255464 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:39 crc kubenswrapper[5121]: I0126 00:11:39.255465 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:39 crc kubenswrapper[5121]: E0126 00:11:39.255570 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 26 00:11:39 crc kubenswrapper[5121]: E0126 00:11:39.255657 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 26 00:11:39 crc kubenswrapper[5121]: E0126 00:11:39.255737 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 26 00:11:40 crc kubenswrapper[5121]: I0126 00:11:40.263930 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h"
Jan 26 00:11:40 crc kubenswrapper[5121]: E0126 00:11:40.264060 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2st6h" podUID="e2c23c20-cf98-42ae-b5fb-5bbde2b0740c"
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.255921 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.255940 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.256184 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.259214 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.259242 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.259251 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.260549 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.870302 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:11:41 crc kubenswrapper[5121]: I0126 00:11:41.898777 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jx85r" podStartSLOduration=105.898737287 podStartE2EDuration="1m45.898737287s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:35.95265349 +0000 UTC m=+127.111854655" watchObservedRunningTime="2026-01-26 00:11:41.898737287 +0000 UTC m=+133.057938412"
Jan 26 00:11:42 crc kubenswrapper[5121]: I0126 00:11:42.260017 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h"
Jan 26 00:11:42 crc kubenswrapper[5121]: I0126 00:11:42.262432 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 26 00:11:42 crc kubenswrapper[5121]: I0126 00:11:42.263729 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 26 00:11:44 crc kubenswrapper[5121]: I0126 00:11:44.266393 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 26 00:11:44 crc kubenswrapper[5121]: I0126 00:11:44.304793 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.183287 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-g5dxr"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.183883 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.187395 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.188596 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.189037 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.189378 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.189504 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.190689 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.190887 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.209034 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.210141 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.211064 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.211098 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.211242 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.211409 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.210753 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.211492 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.211715 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.210722 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.215465 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.215501 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.218066 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.218083 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.220162 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.220259 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.306474 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.311307 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.311517 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.311683 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.311931 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.312142 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.315577 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.316689 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29489760-n6btg"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321300 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-serving-cert\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321374 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zww\" (UniqueName: \"kubernetes.io/projected/194e9801-7419-4afa-b8f8-f0845d720283-kube-api-access-r6zww\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321404 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4508d-2f14-4dda-9f09-05c8ad70670b-config\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321432 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194e9801-7419-4afa-b8f8-f0845d720283-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321455 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194e9801-7419-4afa-b8f8-f0845d720283-config\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321490 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22c4508d-2f14-4dda-9f09-05c8ad70670b-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321644 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-oauth-serving-cert\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321748 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4508d-2f14-4dda-9f09-05c8ad70670b-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321823 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-config\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321854 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvz7j\" (UniqueName: \"kubernetes.io/projected/22c4508d-2f14-4dda-9f09-05c8ad70670b-kube-api-access-xvz7j\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321891 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-service-ca\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321921 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qklj\" (UniqueName: \"kubernetes.io/projected/85c879f7-5fe1-44b3-94ca-dd368a14be73-kube-api-access-6qklj\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321966 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-oauth-config\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.321998 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-trusted-ca-bundle\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422411 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-serving-cert\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422471 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/069690ff-331e-4ee8-bed5-24d79f939a40-auth-proxy-config\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422499 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zww\" (UniqueName: \"kubernetes.io/projected/194e9801-7419-4afa-b8f8-f0845d720283-kube-api-access-r6zww\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422520 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4508d-2f14-4dda-9f09-05c8ad70670b-config\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422550 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/069690ff-331e-4ee8-bed5-24d79f939a40-config\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422574 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194e9801-7419-4afa-b8f8-f0845d720283-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422592 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194e9801-7419-4afa-b8f8-f0845d720283-config\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422612 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22c4508d-2f14-4dda-9f09-05c8ad70670b-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422628 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-oauth-serving-cert\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422655 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp2bq\" (UniqueName: \"kubernetes.io/projected/069690ff-331e-4ee8-bed5-24d79f939a40-kube-api-access-vp2bq\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422681 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4508d-2f14-4dda-9f09-05c8ad70670b-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422712 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-config\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422733 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvz7j\" (UniqueName: \"kubernetes.io/projected/22c4508d-2f14-4dda-9f09-05c8ad70670b-kube-api-access-xvz7j\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422753 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-service-ca\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422786 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qklj\" (UniqueName: \"kubernetes.io/projected/85c879f7-5fe1-44b3-94ca-dd368a14be73-kube-api-access-6qklj\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422815 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-oauth-config\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422833 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/069690ff-331e-4ee8-bed5-24d79f939a40-machine-approver-tls\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.422862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-trusted-ca-bundle\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.423727 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-service-ca\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.423929 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-oauth-serving-cert\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.424393 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-config\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.425595 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4508d-2f14-4dda-9f09-05c8ad70670b-config\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.426339 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85c879f7-5fe1-44b3-94ca-dd368a14be73-trusted-ca-bundle\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.427130 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194e9801-7419-4afa-b8f8-f0845d720283-config\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.430983 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-oauth-config\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.433224 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22c4508d-2f14-4dda-9f09-05c8ad70670b-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.435488 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194e9801-7419-4afa-b8f8-f0845d720283-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.436387 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/85c879f7-5fe1-44b3-94ca-dd368a14be73-console-serving-cert\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.436396 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4508d-2f14-4dda-9f09-05c8ad70670b-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.443385 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qklj\" (UniqueName: \"kubernetes.io/projected/85c879f7-5fe1-44b3-94ca-dd368a14be73-kube-api-access-6qklj\") pod \"console-64d44f6ddf-g5dxr\" (UID: \"85c879f7-5fe1-44b3-94ca-dd368a14be73\") " pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.443956 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zww\" (UniqueName: \"kubernetes.io/projected/194e9801-7419-4afa-b8f8-f0845d720283-kube-api-access-r6zww\") pod \"openshift-apiserver-operator-846cbfc458-zxxq5\" (UID: \"194e9801-7419-4afa-b8f8-f0845d720283\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.447936 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvz7j\" (UniqueName: \"kubernetes.io/projected/22c4508d-2f14-4dda-9f09-05c8ad70670b-kube-api-access-xvz7j\") pod \"openshift-controller-manager-operator-686468bdd5-ngcw5\" (UID: \"22c4508d-2f14-4dda-9f09-05c8ad70670b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.501504 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.501688 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-n6btg"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.504705 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.504732 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.516639 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.516748 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.519712 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.519949 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.520186 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.520496 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.520641 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.520735 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6ztm9"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.520971 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.520808 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.520854 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.523541 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vp2bq\" (UniqueName: \"kubernetes.io/projected/069690ff-331e-4ee8-bed5-24d79f939a40-kube-api-access-vp2bq\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.523616 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/069690ff-331e-4ee8-bed5-24d79f939a40-machine-approver-tls\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.523665 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/069690ff-331e-4ee8-bed5-24d79f939a40-auth-proxy-config\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.523698 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/069690ff-331e-4ee8-bed5-24d79f939a40-config\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.524336 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/069690ff-331e-4ee8-bed5-24d79f939a40-config\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.524960 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/069690ff-331e-4ee8-bed5-24d79f939a40-auth-proxy-config\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.525089 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.526572 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-prnb4"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.526733 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.527560 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.530249 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/069690ff-331e-4ee8-bed5-24d79f939a40-machine-approver-tls\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.531884 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.532021 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.534877 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.535834 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.535951 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.536091 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.536493 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.536635 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.536861 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.537243 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.540637 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-579cz"]
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.540723 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.540858 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.540964 5121 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.542278 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.543503 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.543623 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.543691 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.543845 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.543893 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544069 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544101 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544205 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544258 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544336 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544245 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544467 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544657 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.544755 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.550503 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.552075 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:11:46 crc kubenswrapper[5121]: 
I0126 00:11:46.552271 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.555178 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp2bq\" (UniqueName: \"kubernetes.io/projected/069690ff-331e-4ee8-bed5-24d79f939a40-kube-api-access-vp2bq\") pod \"machine-approver-54c688565-9rgbz\" (UID: \"069690ff-331e-4ee8-bed5-24d79f939a40\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.557018 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.557254 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.559949 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.567950 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624674 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624719 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/63d0a3c7-ad3c-4556-b95a-7e1143caca62-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624742 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9s6\" (UniqueName: \"kubernetes.io/projected/63d0a3c7-ad3c-4556-b95a-7e1143caca62-kube-api-access-tg9s6\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624779 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/316af2c1-6a7f-4000-8926-49a441b4f1cc-images\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624841 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-dir\") 
pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624877 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624905 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcdh9\" (UniqueName: \"kubernetes.io/projected/eac9c212-b298-468b-a465-d924254ae8ab-kube-api-access-lcdh9\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624930 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624954 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63d0a3c7-ad3c-4556-b95a-7e1143caca62-tmp\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.624985 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-config\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625020 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-audit\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625053 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6e039297-dc55-4c6b-b76e-d2b83365ca3d-node-pullsecrets\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625072 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/413e3cab-21d5-4c17-9ac8-4cfb8602343c-serviceca\") pod \"image-pruner-29489760-n6btg\" (UID: 
\"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") " pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625090 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-etcd-client\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625107 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-serving-cert\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625132 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-client-ca\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625146 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-encryption-config\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625160 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e039297-dc55-4c6b-b76e-d2b83365ca3d-audit-dir\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625203 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/63d0a3c7-ad3c-4556-b95a-7e1143caca62-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625234 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625256 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac9c212-b298-468b-a465-d924254ae8ab-tmp\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" 
Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625298 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625323 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-image-import-ca\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625358 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/316af2c1-6a7f-4000-8926-49a441b4f1cc-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625472 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625516 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625559 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625584 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625604 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnnjr\" (UniqueName: \"kubernetes.io/projected/316af2c1-6a7f-4000-8926-49a441b4f1cc-kube-api-access-nnnjr\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: 
\"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625630 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-policies\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625650 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625670 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsvsr\" (UniqueName: \"kubernetes.io/projected/6e039297-dc55-4c6b-b76e-d2b83365ca3d-kube-api-access-rsvsr\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625695 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47l46\" (UniqueName: \"kubernetes.io/projected/413e3cab-21d5-4c17-9ac8-4cfb8602343c-kube-api-access-47l46\") pod \"image-pruner-29489760-n6btg\" (UID: \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") " pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625718 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63d0a3c7-ad3c-4556-b95a-7e1143caca62-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625740 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-config\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625835 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjhnw\" (UniqueName: \"kubernetes.io/projected/387d3abf-783f-4184-81db-2fa8fa54ffc8-kube-api-access-cjhnw\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625859 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63d0a3c7-ad3c-4556-b95a-7e1143caca62-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625889 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/316af2c1-6a7f-4000-8926-49a441b4f1cc-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625912 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625934 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625957 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac9c212-b298-468b-a465-d924254ae8ab-serving-cert\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.625980 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.627350 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727437 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-client-ca\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727490 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-encryption-config\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727512 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e039297-dc55-4c6b-b76e-d2b83365ca3d-audit-dir\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727549 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/63d0a3c7-ad3c-4556-b95a-7e1143caca62-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727575 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727598 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac9c212-b298-468b-a465-d924254ae8ab-tmp\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727623 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727645 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-image-import-ca\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727666 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/316af2c1-6a7f-4000-8926-49a441b4f1cc-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727688 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727720 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727797 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727825 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727847 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nnnjr\" (UniqueName: \"kubernetes.io/projected/316af2c1-6a7f-4000-8926-49a441b4f1cc-kube-api-access-nnnjr\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727871 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-policies\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727889 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727912 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rsvsr\" (UniqueName: \"kubernetes.io/projected/6e039297-dc55-4c6b-b76e-d2b83365ca3d-kube-api-access-rsvsr\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727937 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47l46\" (UniqueName: \"kubernetes.io/projected/413e3cab-21d5-4c17-9ac8-4cfb8602343c-kube-api-access-47l46\") pod \"image-pruner-29489760-n6btg\" (UID: \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") " pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727963 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63d0a3c7-ad3c-4556-b95a-7e1143caca62-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.727984 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-config\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728009 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cjhnw\" (UniqueName: \"kubernetes.io/projected/387d3abf-783f-4184-81db-2fa8fa54ffc8-kube-api-access-cjhnw\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728033 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63d0a3c7-ad3c-4556-b95a-7e1143caca62-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728059 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/316af2c1-6a7f-4000-8926-49a441b4f1cc-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728085 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728106 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728130 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac9c212-b298-468b-a465-d924254ae8ab-serving-cert\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728152 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728175 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728198 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/63d0a3c7-ad3c-4556-b95a-7e1143caca62-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728222 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9s6\" (UniqueName: \"kubernetes.io/projected/63d0a3c7-ad3c-4556-b95a-7e1143caca62-kube-api-access-tg9s6\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728243 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/316af2c1-6a7f-4000-8926-49a441b4f1cc-images\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728281 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-dir\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728302 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-session\") 
pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728322 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcdh9\" (UniqueName: \"kubernetes.io/projected/eac9c212-b298-468b-a465-d924254ae8ab-kube-api-access-lcdh9\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728348 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728373 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63d0a3c7-ad3c-4556-b95a-7e1143caca62-tmp\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728394 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-config\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728434 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-audit\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728463 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6e039297-dc55-4c6b-b76e-d2b83365ca3d-node-pullsecrets\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728491 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/413e3cab-21d5-4c17-9ac8-4cfb8602343c-serviceca\") pod \"image-pruner-29489760-n6btg\" (UID: \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") " pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728517 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-etcd-client\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.728544 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-serving-cert\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.730425 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.731101 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/316af2c1-6a7f-4000-8926-49a441b4f1cc-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.733340 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.733862 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-policies\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.734689 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.734812 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/316af2c1-6a7f-4000-8926-49a441b4f1cc-images\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.734907 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-dir\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.736050 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-config\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.730518 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63d0a3c7-ad3c-4556-b95a-7e1143caca62-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.736934 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.737450 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac9c212-b298-468b-a465-d924254ae8ab-tmp\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.737494 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/316af2c1-6a7f-4000-8926-49a441b4f1cc-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.737601 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.737847 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/63d0a3c7-ad3c-4556-b95a-7e1143caca62-tmp\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.738438 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.739262 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/413e3cab-21d5-4c17-9ac8-4cfb8602343c-serviceca\") pod \"image-pruner-29489760-n6btg\" (UID: \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") " pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.739350 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/6e039297-dc55-4c6b-b76e-d2b83365ca3d-node-pullsecrets\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.740120 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-config\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.742783 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e039297-dc55-4c6b-b76e-d2b83365ca3d-audit-dir\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.743073 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.743942 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-client-ca\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.743984 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-image-import-ca\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.745108 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/63d0a3c7-ad3c-4556-b95a-7e1143caca62-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.745837 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6e039297-dc55-4c6b-b76e-d2b83365ca3d-audit\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.747082 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac9c212-b298-468b-a465-d924254ae8ab-serving-cert\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.748190 
5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-etcd-client\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.748444 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.749651 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-encryption-config\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.755315 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.756860 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.759159 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/63d0a3c7-ad3c-4556-b95a-7e1143caca62-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.760054 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjhnw\" (UniqueName: \"kubernetes.io/projected/387d3abf-783f-4184-81db-2fa8fa54ffc8-kube-api-access-cjhnw\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.760148 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.761160 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6e039297-dc55-4c6b-b76e-d2b83365ca3d-serving-cert\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.761207 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.768441 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnnjr\" (UniqueName: \"kubernetes.io/projected/316af2c1-6a7f-4000-8926-49a441b4f1cc-kube-api-access-nnnjr\") pod \"machine-config-operator-67c9d58cbb-vbxd4\" (UID: \"316af2c1-6a7f-4000-8926-49a441b4f1cc\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.772373 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-6ztm9\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") " pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.773783 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63d0a3c7-ad3c-4556-b95a-7e1143caca62-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: \"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.774290 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsvsr\" (UniqueName: \"kubernetes.io/projected/6e039297-dc55-4c6b-b76e-d2b83365ca3d-kube-api-access-rsvsr\") pod \"apiserver-9ddfb9f55-prnb4\" (UID: \"6e039297-dc55-4c6b-b76e-d2b83365ca3d\") " pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.774676 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcdh9\" (UniqueName: \"kubernetes.io/projected/eac9c212-b298-468b-a465-d924254ae8ab-kube-api-access-lcdh9\") pod \"route-controller-manager-776cdc94d6-rqsvg\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.775437 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47l46\" (UniqueName: \"kubernetes.io/projected/413e3cab-21d5-4c17-9ac8-4cfb8602343c-kube-api-access-47l46\") pod \"image-pruner-29489760-n6btg\" (UID: \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") " pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.775969 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9s6\" (UniqueName: \"kubernetes.io/projected/63d0a3c7-ad3c-4556-b95a-7e1143caca62-kube-api-access-tg9s6\") pod \"cluster-image-registry-operator-86c45576b9-htdxn\" (UID: 
\"63d0a3c7-ad3c-4556-b95a-7e1143caca62\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.820204 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.861434 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.874520 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.884315 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.898242 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:11:46 crc kubenswrapper[5121]: I0126 00:11:46.908180 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.017570 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.018893 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.024502 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.024819 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.025178 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.025371 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.025655 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.025749 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.025906 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.026521 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.033269 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7"] Jan 26 
Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.041580 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-jxx48"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.041909 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.042328 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.044862 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.044961 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.046379 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.046552 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.048280 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.052641 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.052964 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.053392 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.053558 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.054193 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.060938 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-ljq2k"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.062380 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.066114 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-nhmff"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.067363 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.070378 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-4whj5"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.070452 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.071596 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.072108 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.073688 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.073982 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.077824 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.078078 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.078143 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.078367 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.078530 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.078544 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.078726 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.079146 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.079557 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.079917 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.084579 5121 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-admission-controller-69db94689b-xzxxt"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.084865 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.096621 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.102470 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.107835 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.110080 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.117225 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.117440 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.117657 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.117976 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140051 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-serving-cert\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140130 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-trusted-ca-bundle\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140153 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swl2z\" (UniqueName: \"kubernetes.io/projected/85bedc20-2632-45f3-bfac-d20d34024cb3-kube-api-access-swl2z\") pod \"cluster-samples-operator-6b564684c8-mgsgw\" (UID: \"85bedc20-2632-45f3-bfac-d20d34024cb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140193 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7358b62-8abe-4d72-ae2d-29f96ed81902-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140221 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-audit-policies\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140235 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msct7\" (UniqueName: \"kubernetes.io/projected/75e2dc1c-f659-4dc2-a18d-141f468e666a-kube-api-access-msct7\") pod \"downloads-747b44746d-jxx48\" (UID: \"75e2dc1c-f659-4dc2-a18d-141f468e666a\") " pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140250 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/85bedc20-2632-45f3-bfac-d20d34024cb3-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-mgsgw\" (UID: \"85bedc20-2632-45f3-bfac-d20d34024cb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140289 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-encryption-config\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140310 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8bv\" (UniqueName: \"kubernetes.io/projected/cfaf2a6d-872e-498c-bffd-089932c74e19-kube-api-access-cd8bv\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140326 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7358b62-8abe-4d72-ae2d-29f96ed81902-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140345 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7358b62-8abe-4d72-ae2d-29f96ed81902-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140406 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-etcd-client\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140444 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7358b62-8abe-4d72-ae2d-29f96ed81902-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140510 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfaf2a6d-872e-498c-bffd-089932c74e19-audit-dir\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.140723 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-etcd-serving-ca\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242193 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfeddd81-f3cd-485c-8637-053e6d8cec00-images\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242244 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7358b62-8abe-4d72-ae2d-29f96ed81902-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242269 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaefaab3-bc8a-4e99-8114-7b929c835941-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242296 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfaf2a6d-872e-498c-bffd-089932c74e19-audit-dir\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242335 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9da119f5-ef9e-41d0-adef-a5e261563611-available-featuregates\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 
26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242363 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-etcd-serving-ca\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242390 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da119f5-ef9e-41d0-adef-a5e261563611-serving-cert\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242413 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nks9p\" (UniqueName: \"kubernetes.io/projected/9da119f5-ef9e-41d0-adef-a5e261563611-kube-api-access-nks9p\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242440 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaefaab3-bc8a-4e99-8114-7b929c835941-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242460 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-serving-cert\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242492 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-trusted-ca-bundle\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242509 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-swl2z\" (UniqueName: \"kubernetes.io/projected/85bedc20-2632-45f3-bfac-d20d34024cb3-kube-api-access-swl2z\") pod \"cluster-samples-operator-6b564684c8-mgsgw\" (UID: \"85bedc20-2632-45f3-bfac-d20d34024cb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242540 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaefaab3-bc8a-4e99-8114-7b929c835941-config\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 
00:11:47.242590 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7358b62-8abe-4d72-ae2d-29f96ed81902-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242614 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-audit-policies\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242633 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-msct7\" (UniqueName: \"kubernetes.io/projected/75e2dc1c-f659-4dc2-a18d-141f468e666a-kube-api-access-msct7\") pod \"downloads-747b44746d-jxx48\" (UID: \"75e2dc1c-f659-4dc2-a18d-141f468e666a\") " pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242648 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/85bedc20-2632-45f3-bfac-d20d34024cb3-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-mgsgw\" (UID: \"85bedc20-2632-45f3-bfac-d20d34024cb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242665 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d947e22-7d64-4bc5-a715-e95485fa0c57-serving-cert\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242680 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d947e22-7d64-4bc5-a715-e95485fa0c57-trusted-ca\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242711 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeddd81-f3cd-485c-8637-053e6d8cec00-config\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242728 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaefaab3-bc8a-4e99-8114-7b929c835941-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242750 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-encryption-config\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242796 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8bv\" (UniqueName: \"kubernetes.io/projected/cfaf2a6d-872e-498c-bffd-089932c74e19-kube-api-access-cd8bv\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242818 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7358b62-8abe-4d72-ae2d-29f96ed81902-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242844 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7358b62-8abe-4d72-ae2d-29f96ed81902-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242872 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92dj\" (UniqueName: \"kubernetes.io/projected/dfeddd81-f3cd-485c-8637-053e6d8cec00-kube-api-access-b92dj\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242887 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d947e22-7d64-4bc5-a715-e95485fa0c57-config\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242906 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeddd81-f3cd-485c-8637-053e6d8cec00-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242921 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9prt\" (UniqueName: \"kubernetes.io/projected/1d947e22-7d64-4bc5-a715-e95485fa0c57-kube-api-access-d9prt\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.242937 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-etcd-client\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.245843 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7358b62-8abe-4d72-ae2d-29f96ed81902-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.245913 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfaf2a6d-872e-498c-bffd-089932c74e19-audit-dir\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.250434 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-etcd-serving-ca\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.251328 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-trusted-ca-bundle\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.252272 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cfaf2a6d-872e-498c-bffd-089932c74e19-audit-policies\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.252890 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7358b62-8abe-4d72-ae2d-29f96ed81902-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.253575 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-etcd-client\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.257942 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/85bedc20-2632-45f3-bfac-d20d34024cb3-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-mgsgw\" (UID: \"85bedc20-2632-45f3-bfac-d20d34024cb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.273955 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-encryption-config\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.274388 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7358b62-8abe-4d72-ae2d-29f96ed81902-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.276161 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfaf2a6d-872e-498c-bffd-089932c74e19-serving-cert\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.278228 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-swl2z\" (UniqueName: \"kubernetes.io/projected/85bedc20-2632-45f3-bfac-d20d34024cb3-kube-api-access-swl2z\") pod \"cluster-samples-operator-6b564684c8-mgsgw\" (UID: \"85bedc20-2632-45f3-bfac-d20d34024cb3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.278897 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-msct7\" (UniqueName: \"kubernetes.io/projected/75e2dc1c-f659-4dc2-a18d-141f468e666a-kube-api-access-msct7\") pod \"downloads-747b44746d-jxx48\" (UID: \"75e2dc1c-f659-4dc2-a18d-141f468e666a\") " pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.279722 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7358b62-8abe-4d72-ae2d-29f96ed81902-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nbmc7\" (UID: \"a7358b62-8abe-4d72-ae2d-29f96ed81902\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.294979 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8bv\" (UniqueName: \"kubernetes.io/projected/cfaf2a6d-872e-498c-bffd-089932c74e19-kube-api-access-cd8bv\") pod \"apiserver-8596bd845d-579cz\" (UID: \"cfaf2a6d-872e-498c-bffd-089932c74e19\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: W0126 00:11:47.301307 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeac9c212_b298_468b_a465_d924254ae8ab.slice/crio-1e403c59a1e2df36be4af5cfdf22f25a04dca6b1b904d7949058bafaf42eda04 WatchSource:0}: Error finding container 1e403c59a1e2df36be4af5cfdf22f25a04dca6b1b904d7949058bafaf42eda04: Status 404 returned error can't find the container with id 1e403c59a1e2df36be4af5cfdf22f25a04dca6b1b904d7949058bafaf42eda04 Jan 26 00:11:47 crc kubenswrapper[5121]: W0126 00:11:47.302780 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387d3abf_783f_4184_81db_2fa8fa54ffc8.slice/crio-c4d4a744af559ff847df5e8610e7dadd3c81c46c4370be0f7fcf526e6800c541 WatchSource:0}: Error finding container c4d4a744af559ff847df5e8610e7dadd3c81c46c4370be0f7fcf526e6800c541: Status 404 returned error can't find the container with id c4d4a744af559ff847df5e8610e7dadd3c81c46c4370be0f7fcf526e6800c541 Jan 26 00:11:47 crc kubenswrapper[5121]: W0126 00:11:47.319730 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63d0a3c7_ad3c_4556_b95a_7e1143caca62.slice/crio-a93fb51b35ad1952efe9d7e483937ba6007f3b8f10f6558d6e55773c600b7802 WatchSource:0}: Error finding container a93fb51b35ad1952efe9d7e483937ba6007f3b8f10f6558d6e55773c600b7802: Status 404 returned error can't find the container with id a93fb51b35ad1952efe9d7e483937ba6007f3b8f10f6558d6e55773c600b7802 Jan 26 00:11:47 crc kubenswrapper[5121]: W0126 00:11:47.321040 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod316af2c1_6a7f_4000_8926_49a441b4f1cc.slice/crio-d0df0da3c299eb5cffa2cc1b5fc883ae91d634431413d44929b93733d0723755 WatchSource:0}: Error finding container d0df0da3c299eb5cffa2cc1b5fc883ae91d634431413d44929b93733d0723755: Status 404 returned error can't find the container with id d0df0da3c299eb5cffa2cc1b5fc883ae91d634431413d44929b93733d0723755 Jan 26 00:11:47 crc kubenswrapper[5121]: W0126 00:11:47.326893 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e039297_dc55_4c6b_b76e_d2b83365ca3d.slice/crio-dd01485cad6dad22eb754b3ed0f203712e745c0c010da5c371fe4d56965bcde1 WatchSource:0}: Error finding container dd01485cad6dad22eb754b3ed0f203712e745c0c010da5c371fe4d56965bcde1: Status 404 returned error can't find the container with id dd01485cad6dad22eb754b3ed0f203712e745c0c010da5c371fe4d56965bcde1 Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.343834 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeddd81-f3cd-485c-8637-053e6d8cec00-config\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.343879 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaefaab3-bc8a-4e99-8114-7b929c835941-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.343910 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b92dj\" (UniqueName: \"kubernetes.io/projected/dfeddd81-f3cd-485c-8637-053e6d8cec00-kube-api-access-b92dj\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5"
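
The W-level manager.go:1169 warnings above are cAdvisor's cgroup watcher racing container creation: inotify reports a new crio-<id> cgroup as soon as CRI-O creates it, cAdvisor immediately asks the runtime about that ID, and the lookup returns 404 because the container has not finished registering. During a mass pod start like this one, such warnings are usually transient noise, and the reconciler entries resuming right after (reconciler_common.go:251 VerifyControllerAttachedVolume, then reconciler_common.go:224 MountVolume) show the next wave of pods proceeding normally. One way to confirm the warnings are benign is to cross-check the 64-hex IDs against later PLEG "ContainerStarted" events; a sketch under the same one-entry-per-line assumption as above:

```python
#!/usr/bin/env python3
"""Cross-check cAdvisor watch 404s against later PLEG events.

Sketch; kubelet.log is the same hypothetical one-entry-per-line export.
An ID from a manager.go:1169 warning that later appears in a
"ContainerStarted" event was a startup race, not a lost container.
"""
import re

WATCH_404 = re.compile(r"can't find the container with id ([0-9a-f]{64})")
STARTED = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

with open("kubelet.log", encoding="utf-8") as fh:
    text = fh.read()

started = set(STARTED.findall(text))
for cid in sorted(set(WATCH_404.findall(text))):
    verdict = "started later (benign race)" if cid in started else "not seen again in this window"
    print(f"crio-{cid[:12]}...  {verdict}")
```
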
\"kubernetes.io/configmap/1d947e22-7d64-4bc5-a715-e95485fa0c57-config\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.343941 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeddd81-f3cd-485c-8637-053e6d8cec00-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344223 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9prt\" (UniqueName: \"kubernetes.io/projected/1d947e22-7d64-4bc5-a715-e95485fa0c57-kube-api-access-d9prt\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344300 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfeddd81-f3cd-485c-8637-053e6d8cec00-images\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344350 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaefaab3-bc8a-4e99-8114-7b929c835941-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344421 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9da119f5-ef9e-41d0-adef-a5e261563611-available-featuregates\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344474 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da119f5-ef9e-41d0-adef-a5e261563611-serving-cert\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344497 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nks9p\" (UniqueName: \"kubernetes.io/projected/9da119f5-ef9e-41d0-adef-a5e261563611-kube-api-access-nks9p\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344537 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaefaab3-bc8a-4e99-8114-7b929c835941-tmp-dir\") pod 
\"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344608 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaefaab3-bc8a-4e99-8114-7b929c835941-config\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344668 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d947e22-7d64-4bc5-a715-e95485fa0c57-serving-cert\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.344690 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d947e22-7d64-4bc5-a715-e95485fa0c57-trusted-ca\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.345067 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeddd81-f3cd-485c-8637-053e6d8cec00-config\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.345234 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfeddd81-f3cd-485c-8637-053e6d8cec00-images\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.345749 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaefaab3-bc8a-4e99-8114-7b929c835941-config\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.345975 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d947e22-7d64-4bc5-a715-e95485fa0c57-config\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.346048 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaefaab3-bc8a-4e99-8114-7b929c835941-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc 
kubenswrapper[5121]: I0126 00:11:47.346325 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9da119f5-ef9e-41d0-adef-a5e261563611-available-featuregates\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.346368 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d947e22-7d64-4bc5-a715-e95485fa0c57-trusted-ca\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.357820 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da119f5-ef9e-41d0-adef-a5e261563611-serving-cert\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.357873 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeddd81-f3cd-485c-8637-053e6d8cec00-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.358427 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d947e22-7d64-4bc5-a715-e95485fa0c57-serving-cert\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.360209 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaefaab3-bc8a-4e99-8114-7b929c835941-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.362706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nks9p\" (UniqueName: \"kubernetes.io/projected/9da119f5-ef9e-41d0-adef-a5e261563611-kube-api-access-nks9p\") pod \"openshift-config-operator-5777786469-ljq2k\" (UID: \"9da119f5-ef9e-41d0-adef-a5e261563611\") " pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.363903 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aaefaab3-bc8a-4e99-8114-7b929c835941-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-x9ptc\" (UID: \"aaefaab3-bc8a-4e99-8114-7b929c835941\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.364011 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b92dj\" (UniqueName: \"kubernetes.io/projected/dfeddd81-f3cd-485c-8637-053e6d8cec00-kube-api-access-b92dj\") pod \"machine-api-operator-755bb95488-4whj5\" (UID: \"dfeddd81-f3cd-485c-8637-053e6d8cec00\") " pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.365344 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9prt\" (UniqueName: \"kubernetes.io/projected/1d947e22-7d64-4bc5-a715-e95485fa0c57-kube-api-access-d9prt\") pod \"console-operator-67c89758df-nhmff\" (UID: \"1d947e22-7d64-4bc5-a715-e95485fa0c57\") " pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.378147 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.398803 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.402354 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-c2pks"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.402539 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.405065 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.407398 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.410856 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.416938 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" event={"ID":"069690ff-331e-4ee8-bed5-24d79f939a40","Type":"ContainerStarted","Data":"f003e7d92a69911ed9a08eaccecfc15ac1b33b841288164084cf849f6522140e"} Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.416988 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5" event={"ID":"22c4508d-2f14-4dda-9f09-05c8ad70670b","Type":"ContainerStarted","Data":"97aa59c8519e12e898050414270d2a653f0c7ea21178f46259728b3bc7a52118"} Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.417005 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-g5dxr" event={"ID":"85c879f7-5fe1-44b3-94ca-dd368a14be73","Type":"ContainerStarted","Data":"983ba90151a1c25c3404f1d769e9c4317e20070eed288d6c7c39e34191347a83"} Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.417030 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tgcgk"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.417062 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.431061 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.432707 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.433275 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443214 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443259 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443308 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443322 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-n6btg" event={"ID":"413e3cab-21d5-4c17-9ac8-4cfb8602343c","Type":"ContainerStarted","Data":"6d5775464e980aba9fa20459608be06b62c894f6d9cf800017c88a4533b62754"} Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443347 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-nhmff"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443361 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5" event={"ID":"194e9801-7419-4afa-b8f8-f0845d720283","Type":"ContainerStarted","Data":"0fcf6e1c71f2d4039d1df293c1a13d82a3b9630af138a398a78e64185e665f6d"} Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443518 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443532 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-g5dxr"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443545 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6ztm9"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443557 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-579cz"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443569 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.443585 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.447308 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.447829 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.451357 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-4whj5"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.451404 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-prnb4"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.451418 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.451431 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tgcgk"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.451442 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-ljq2k"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.451460 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.452144 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.469196 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.469294 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.481185 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-94msz"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.481569 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.487026 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.507712 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.524954 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.528021 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.546908 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x8pn\" (UniqueName: \"kubernetes.io/projected/51aff718-c15d-4232-8ba2-db2b79dc020a-kube-api-access-8x8pn\") pod \"multus-admission-controller-69db94689b-xzxxt\" (UID: \"51aff718-c15d-4232-8ba2-db2b79dc020a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.547507 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51aff718-c15d-4232-8ba2-db2b79dc020a-webhook-certs\") pod \"multus-admission-controller-69db94689b-xzxxt\" (UID: \"51aff718-c15d-4232-8ba2-db2b79dc020a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.547664 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-config\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.547797 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78781662-c6e5-43f1-8914-a11c064230ca-tmp\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.547934 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkr8g\" (UniqueName: \"kubernetes.io/projected/78781662-c6e5-43f1-8914-a11c064230ca-kube-api-access-vkr8g\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.547937 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.548648 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bd78e9f-18ce-4592-866f-029d883e2d95-secret-volume\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.548971 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-client-ca\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 
00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.549087 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78781662-c6e5-43f1-8914-a11c064230ca-serving-cert\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.549110 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljll9\" (UniqueName: \"kubernetes.io/projected/3bd78e9f-18ce-4592-866f-029d883e2d95-kube-api-access-ljll9\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.549147 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd78e9f-18ce-4592-866f-029d883e2d95-config-volume\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.549167 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.557281 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.583687 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.587054 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.901071 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.915735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78781662-c6e5-43f1-8914-a11c064230ca-serving-cert\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.915787 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljll9\" (UniqueName: \"kubernetes.io/projected/3bd78e9f-18ce-4592-866f-029d883e2d95-kube-api-access-ljll9\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.915818 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd78e9f-18ce-4592-866f-029d883e2d95-config-volume\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916163 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916249 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8x8pn\" (UniqueName: \"kubernetes.io/projected/51aff718-c15d-4232-8ba2-db2b79dc020a-kube-api-access-8x8pn\") pod \"multus-admission-controller-69db94689b-xzxxt\" (UID: \"51aff718-c15d-4232-8ba2-db2b79dc020a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916287 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51aff718-c15d-4232-8ba2-db2b79dc020a-webhook-certs\") pod \"multus-admission-controller-69db94689b-xzxxt\" (UID: \"51aff718-c15d-4232-8ba2-db2b79dc020a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916324 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-config\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: 
\"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916376 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78781662-c6e5-43f1-8914-a11c064230ca-tmp\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916518 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vkr8g\" (UniqueName: \"kubernetes.io/projected/78781662-c6e5-43f1-8914-a11c064230ca-kube-api-access-vkr8g\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916620 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bd78e9f-18ce-4592-866f-029d883e2d95-secret-volume\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.916699 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-client-ca\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.917894 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-client-ca\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.919229 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.923166 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78781662-c6e5-43f1-8914-a11c064230ca-serving-cert\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.923524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78781662-c6e5-43f1-8914-a11c064230ca-tmp\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.935260 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.935900 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.936270 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.937837 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.939218 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.940098 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.940327 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.940512 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.940701 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.945282 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/51aff718-c15d-4232-8ba2-db2b79dc020a-webhook-certs\") pod \"multus-admission-controller-69db94689b-xzxxt\" (UID: \"51aff718-c15d-4232-8ba2-db2b79dc020a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.946366 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd78e9f-18ce-4592-866f-029d883e2d95-config-volume\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.947326 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd"] Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.949620 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.951587 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.954078 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bd78e9f-18ce-4592-866f-029d883e2d95-secret-volume\") pod \"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.955192 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-config\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.958986 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.959088 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.959287 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.959452 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.959559 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.964453 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 26 00:11:47 crc kubenswrapper[5121]: I0126 00:11:47.967123 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.033542 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x8pn\" (UniqueName: \"kubernetes.io/projected/51aff718-c15d-4232-8ba2-db2b79dc020a-kube-api-access-8x8pn\") pod \"multus-admission-controller-69db94689b-xzxxt\" (UID: \"51aff718-c15d-4232-8ba2-db2b79dc020a\") " pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.197337 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.199670 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljll9\" (UniqueName: \"kubernetes.io/projected/3bd78e9f-18ce-4592-866f-029d883e2d95-kube-api-access-ljll9\") pod 
\"collect-profiles-29489760-zxr7b\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.205640 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.209317 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-bgksv"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.210708 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.214534 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.214641 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.216915 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.217122 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.287487 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.287894 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.288126 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.288363 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkr8g\" (UniqueName: \"kubernetes.io/projected/78781662-c6e5-43f1-8914-a11c064230ca-kube-api-access-vkr8g\") pod \"controller-manager-65b6cccf98-tgcgk\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.291205 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296506 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-config\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296570 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: 
\"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296635 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-certificates\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296659 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-trusted-ca\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296679 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b378fea-0d65-410c-86a7-e98466259ea0-serving-cert\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296704 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-tls\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296723 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/377fc649-7ccb-4b5e-a98c-f217298fd396-installation-pull-secrets\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296749 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zd9\" (UniqueName: \"kubernetes.io/projected/2b378fea-0d65-410c-86a7-e98466259ea0-kube-api-access-85zd9\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296801 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthvx\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-kube-api-access-tthvx\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296834 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: 
\"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296858 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296926 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/377fc649-7ccb-4b5e-a98c-f217298fd396-ca-trust-extracted\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296944 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-bound-sa-token\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.296968 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7td\" (UniqueName: \"kubernetes.io/projected/644c98f5-22e8-4e28-8d95-427acc12569c-kube-api-access-df7td\") pod \"migrator-866fcbc849-rv4fb\" (UID: \"644c98f5-22e8-4e28-8d95-427acc12569c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.297557 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.797540487 +0000 UTC m=+139.956741612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.300703 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.326717 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.331309 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6xbbq"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.332502 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.347857 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.348526 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6xbbq" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.383434 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.383559 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" event={"ID":"069690ff-331e-4ee8-bed5-24d79f939a40","Type":"ContainerStarted","Data":"f935d9c52daecd3dd24a2d39b21271e661a67d0f28604d42850020d7ac61f6c1"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.383598 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.387439 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.399427 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.399655 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tthvx\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-kube-api-access-tthvx\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.399707 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.899677541 +0000 UTC m=+140.058878666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.399832 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.399869 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.399979 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/377fc649-7ccb-4b5e-a98c-f217298fd396-ca-trust-extracted\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.399999 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-bound-sa-token\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400020 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-df7td\" (UniqueName: \"kubernetes.io/projected/644c98f5-22e8-4e28-8d95-427acc12569c-kube-api-access-df7td\") pod \"migrator-866fcbc849-rv4fb\" (UID: \"644c98f5-22e8-4e28-8d95-427acc12569c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400072 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-config\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400133 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400210 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-certificates\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400228 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-trusted-ca\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400247 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b378fea-0d65-410c-86a7-e98466259ea0-serving-cert\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400274 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-tls\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400292 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/377fc649-7ccb-4b5e-a98c-f217298fd396-installation-pull-secrets\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400328 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-85zd9\" (UniqueName: \"kubernetes.io/projected/2b378fea-0d65-410c-86a7-e98466259ea0-kube-api-access-85zd9\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.400565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/377fc649-7ccb-4b5e-a98c-f217298fd396-ca-trust-extracted\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.401072 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:48.901063482 +0000 UTC m=+140.060264607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.401270 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.401274 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.401626 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b378fea-0d65-410c-86a7-e98466259ea0-config\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.402744 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-trusted-ca\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.402974 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-certificates\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.407216 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.427686 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.451050 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-tls\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.452534 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 
26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.453120 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/377fc649-7ccb-4b5e-a98c-f217298fd396-installation-pull-secrets\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.453709 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b378fea-0d65-410c-86a7-e98466259ea0-serving-cert\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.484786 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhklg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.484993 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.493619 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.498805 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.499846 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.500031 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.501215 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.501651 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.001611558 +0000 UTC m=+140.160812683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.501819 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.502355 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.00231908 +0000 UTC m=+140.161520205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.504285 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.505146 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.509875 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.509949 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.516107 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5" event={"ID":"22c4508d-2f14-4dda-9f09-05c8ad70670b","Type":"ContainerStarted","Data":"dceb5644a1cad42b182f9d95ce5d31eb9aa39ff4700acadf27a88444fcbac0c8"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.516171 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-g5dxr" event={"ID":"85c879f7-5fe1-44b3-94ca-dd368a14be73","Type":"ContainerStarted","Data":"307decf305513092944793153b55ef3a503cea2e855581bbc84f819a54d91ca0"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.516195 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.522917 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" event={"ID":"316af2c1-6a7f-4000-8926-49a441b4f1cc","Type":"ContainerStarted","Data":"d0df0da3c299eb5cffa2cc1b5fc883ae91d634431413d44929b93733d0723755"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.522983 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-lhdjv"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.523033 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.523189 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.529743 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" event={"ID":"6e039297-dc55-4c6b-b76e-d2b83365ca3d","Type":"ContainerStarted","Data":"dd01485cad6dad22eb754b3ed0f203712e745c0c010da5c371fe4d56965bcde1"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.529864 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.530915 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.535876 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.536033 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.540017 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" event={"ID":"cfaf2a6d-872e-498c-bffd-089932c74e19","Type":"ContainerStarted","Data":"e4c2169ab38a732044ef20f88ac367269af92d6c40c5e3d55b71ba6b7d74dcc4"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.540077 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" event={"ID":"63d0a3c7-ad3c-4556-b95a-7e1143caca62","Type":"ContainerStarted","Data":"a93fb51b35ad1952efe9d7e483937ba6007f3b8f10f6558d6e55773c600b7802"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.540105 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5" event={"ID":"194e9801-7419-4afa-b8f8-f0845d720283","Type":"ContainerStarted","Data":"770b8dd1b3d3e317b28c6a3948571000577ff8697685eb5fe124d266846a07f2"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.540129 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" event={"ID":"387d3abf-783f-4184-81db-2fa8fa54ffc8","Type":"ContainerStarted","Data":"c4d4a744af559ff847df5e8610e7dadd3c81c46c4370be0f7fcf526e6800c541"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.540145 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-mh8jv"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.540223 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.544803 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" event={"ID":"eac9c212-b298-468b-a465-d924254ae8ab","Type":"ContainerStarted","Data":"1e403c59a1e2df36be4af5cfdf22f25a04dca6b1b904d7949058bafaf42eda04"} Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.544852 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-926kg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.545041 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549567 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-n6btg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549624 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549637 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549647 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549657 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549669 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549681 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-94msz"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549690 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549698 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549707 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-c2pks"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549720 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-jxx48"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549728 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549737 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-lhdjv"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549746 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-926kg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549757 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549780 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xzxxt"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549789 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549799 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549809 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.549820 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-48wqr"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.550051 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.554647 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.554700 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6xbbq"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.554716 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hs67g"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.554820 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.559600 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.559734 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hs67g"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.559754 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-mh8jv"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.559799 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.559821 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.559839 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-p5bxm"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.559836 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.564110 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.564435 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.568885 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p5bxm"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.568985 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569005 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569018 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-g5dxr"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569054 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-n6btg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569083 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6ztm9"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569101 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569124 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569134 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569147 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-prnb4"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.569367 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.572592 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.572952 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.573225 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.588702 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.597359 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-579cz"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.601632 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.604494 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.604855 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-tmp-dir\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605361 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-serving-cert\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605417 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-stats-auth\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605495 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-default-certificate\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605594 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/1d8242bd-da35-455c-b000-06d3298c3d1d-webhook-cert\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605698 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87353f19-deb2-41e6-bff6-3e2bb861ce33-service-ca-bundle\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605729 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvbc\" (UniqueName: \"kubernetes.io/projected/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-kube-api-access-2xvbc\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605850 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-client\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605873 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-config\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605928 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmcj4\" (UniqueName: \"kubernetes.io/projected/1d8242bd-da35-455c-b000-06d3298c3d1d-kube-api-access-dmcj4\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.605972 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvrb\" (UniqueName: \"kubernetes.io/projected/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-kube-api-access-6vvrb\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.606002 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d8242bd-da35-455c-b000-06d3298c3d1d-tmpfs\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.606280 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-serving-cert\") 
pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.606322 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.106291109 +0000 UTC m=+140.265492234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.611863 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.613704 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-metrics-certs\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.622338 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d8242bd-da35-455c-b000-06d3298c3d1d-apiservice-cert\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.622454 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.622900 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prw2p\" (UniqueName: \"kubernetes.io/projected/87353f19-deb2-41e6-bff6-3e2bb861ce33-kube-api-access-prw2p\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.623277 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.123260659 +0000 UTC m=+140.282461784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.623333 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-config\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.623380 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-ca\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.623419 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-service-ca\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.625225 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw"] Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.627744 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.648568 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.669047 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726130 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726186 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:48 crc kubenswrapper[5121]: E0126 00:11:48.726384 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.226344222 +0000 UTC m=+140.385545347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726493 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9shw\" (UniqueName: \"kubernetes.io/projected/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-kube-api-access-t9shw\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726540 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8hk\" (UniqueName: \"kubernetes.io/projected/e71a821d-2797-4bf9-96d3-d9a384e336e1-kube-api-access-ld8hk\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726574 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/df3bac84-ca0c-4b27-a190-a808916babea-tmp-dir\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726599 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj88q\" (UniqueName: \"kubernetes.io/projected/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-kube-api-access-mj88q\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726632 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-plugins-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726751 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhjw\" (UniqueName: \"kubernetes.io/projected/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-kube-api-access-cmhjw\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726797 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrsq\" (UniqueName: \"kubernetes.io/projected/946bd7f5-92cd-435d-9ff8-72af506917be-kube-api-access-rrrsq\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 
00:11:48.726845 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-srv-cert\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726923 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-mountpoint-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.726953 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2019e529-0498-4aa1-b3f9-65c63707d280-tmpfs\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.727004 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-tmp-dir\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.727029 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnc98\" (UniqueName: \"kubernetes.io/projected/25b4983a-dbb4-499e-9b78-ef637f425116-kube-api-access-qnc98\") pod \"package-server-manager-77f986bd66-hkvjl\" (UID: \"25b4983a-dbb4-499e-9b78-ef637f425116\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.727116 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-config\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.728506 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-serving-cert\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.728574 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv4h4\" (UniqueName: \"kubernetes.io/projected/2019e529-0498-4aa1-b3f9-65c63707d280-kube-api-access-bv4h4\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.728614 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-stats-auth\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.728666 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-tmp-dir\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.728696 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-socket-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.728726 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-serving-cert\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.728751 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-kube-api-access\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.729240 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-tmp-dir\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.734886 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/946bd7f5-92cd-435d-9ff8-72af506917be-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.734970 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-default-certificate\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.735019 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-signing-cabundle\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " 
pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.735059 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.735087 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2gb\" (UniqueName: \"kubernetes.io/projected/18a087c3-ca43-45fb-bacd-4689a2362ac0-kube-api-access-4j2gb\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.736002 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5f5\" (UniqueName: \"kubernetes.io/projected/4c75b2fc-a93e-44bd-9070-7512402f3f71-kube-api-access-wz5f5\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.736111 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d8242bd-da35-455c-b000-06d3298c3d1d-webhook-cert\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.736636 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87353f19-deb2-41e6-bff6-3e2bb861ce33-service-ca-bundle\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.736688 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwww\" (UniqueName: \"kubernetes.io/projected/51903662-2d95-48d2-b713-8ae2f2885e8b-kube-api-access-9xwww\") pod \"ingress-canary-6xbbq\" (UID: \"51903662-2d95-48d2-b713-8ae2f2885e8b\") " pod="openshift-ingress-canary/ingress-canary-6xbbq" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.736988 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xvbc\" (UniqueName: \"kubernetes.io/projected/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-kube-api-access-2xvbc\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.737334 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 
00:11:48.737467 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df3bac84-ca0c-4b27-a190-a808916babea-metrics-tls\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.737578 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsps\" (UniqueName: \"kubernetes.io/projected/67f7f3b0-5f2e-4242-be97-3e765a5ea9e0-kube-api-access-5xsps\") pod \"control-plane-machine-set-operator-75ffdb6fcd-66dzp\" (UID: \"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.737701 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/18a087c3-ca43-45fb-bacd-4689a2362ac0-node-bootstrap-token\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.737865 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-client\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.737995 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-config\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738119 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-metrics-tls\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738228 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51903662-2d95-48d2-b713-8ae2f2885e8b-cert\") pod \"ingress-canary-6xbbq\" (UID: \"51903662-2d95-48d2-b713-8ae2f2885e8b\") " pod="openshift-ingress-canary/ingress-canary-6xbbq" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738343 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvr2c\" (UniqueName: \"kubernetes.io/projected/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-kube-api-access-xvr2c\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738468 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/df3bac84-ca0c-4b27-a190-a808916babea-config-volume\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738576 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67f7f3b0-5f2e-4242-be97-3e765a5ea9e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-66dzp\" (UID: \"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738693 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-config\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738699 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738781 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w46d\" (UniqueName: \"kubernetes.io/projected/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-kube-api-access-7w46d\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmcj4\" (UniqueName: \"kubernetes.io/projected/1d8242bd-da35-455c-b000-06d3298c3d1d-kube-api-access-dmcj4\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738843 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-tmp-dir\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738866 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738891 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e71a821d-2797-4bf9-96d3-d9a384e336e1-config\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738914 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-tmpfs\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738945 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.738996 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvrb\" (UniqueName: \"kubernetes.io/projected/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-kube-api-access-6vvrb\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739031 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d8242bd-da35-455c-b000-06d3298c3d1d-tmpfs\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739077 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-serving-cert\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739103 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25b4983a-dbb4-499e-9b78-ef637f425116-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-hkvjl\" (UID: \"25b4983a-dbb4-499e-9b78-ef637f425116\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739136 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-signing-key\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739195 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-metrics-certs\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739382 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d8242bd-da35-455c-b000-06d3298c3d1d-apiservice-cert\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739414 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-csi-data-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739450 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739490 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2019e529-0498-4aa1-b3f9-65c63707d280-srv-cert\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739531 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c75b2fc-a93e-44bd-9070-7512402f3f71-tmp\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739553 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739584 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6tzs\" (UniqueName: \"kubernetes.io/projected/197eb808-9411-4b4c-b882-85f9c3479dae-kube-api-access-g6tzs\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739614 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prw2p\" (UniqueName: \"kubernetes.io/projected/87353f19-deb2-41e6-bff6-3e2bb861ce33-kube-api-access-prw2p\") pod \"router-default-68cf44c8b8-bgksv\" (UID: 
\"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739639 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-registration-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739661 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71a821d-2797-4bf9-96d3-d9a384e336e1-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739710 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-config\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739732 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-ca\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739756 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/946bd7f5-92cd-435d-9ff8-72af506917be-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739830 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8m95\" (UniqueName: \"kubernetes.io/projected/df3bac84-ca0c-4b27-a190-a808916babea-kube-api-access-w8m95\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739859 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-service-ca\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739883 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2019e529-0498-4aa1-b3f9-65c63707d280-profile-collector-cert\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739907 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739947 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/946bd7f5-92cd-435d-9ff8-72af506917be-ready\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739968 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/18a087c3-ca43-45fb-bacd-4689a2362ac0-certs\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.739994 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.740711 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d8242bd-da35-455c-b000-06d3298c3d1d-tmpfs\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:48 crc kubenswrapper[5121]: I0126 00:11:48.737711 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87353f19-deb2-41e6-bff6-3e2bb861ce33-service-ca-bundle\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.923455 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-config\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.924112 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-ca\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991590 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/946bd7f5-92cd-435d-9ff8-72af506917be-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991704 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8m95\" (UniqueName: \"kubernetes.io/projected/df3bac84-ca0c-4b27-a190-a808916babea-kube-api-access-w8m95\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991741 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2019e529-0498-4aa1-b3f9-65c63707d280-profile-collector-cert\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991801 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991838 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/946bd7f5-92cd-435d-9ff8-72af506917be-ready\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991865 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/18a087c3-ca43-45fb-bacd-4689a2362ac0-certs\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991892 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991948 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9shw\" (UniqueName: \"kubernetes.io/projected/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-kube-api-access-t9shw\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.991975 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8hk\" (UniqueName: \"kubernetes.io/projected/e71a821d-2797-4bf9-96d3-d9a384e336e1-kube-api-access-ld8hk\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992005 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/df3bac84-ca0c-4b27-a190-a808916babea-tmp-dir\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992033 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mj88q\" (UniqueName: \"kubernetes.io/projected/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-kube-api-access-mj88q\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992064 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-plugins-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992110 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmhjw\" (UniqueName: \"kubernetes.io/projected/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-kube-api-access-cmhjw\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992138 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrsq\" (UniqueName: \"kubernetes.io/projected/946bd7f5-92cd-435d-9ff8-72af506917be-kube-api-access-rrrsq\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992159 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-srv-cert\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992188 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-mountpoint-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992217 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2019e529-0498-4aa1-b3f9-65c63707d280-tmpfs\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992250 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qnc98\" (UniqueName: \"kubernetes.io/projected/25b4983a-dbb4-499e-9b78-ef637f425116-kube-api-access-qnc98\") pod \"package-server-manager-77f986bd66-hkvjl\" (UID: \"25b4983a-dbb4-499e-9b78-ef637f425116\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:11:49 crc kubenswrapper[5121]: 
I0126 00:11:48.992277 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-config\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992309 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bv4h4\" (UniqueName: \"kubernetes.io/projected/2019e529-0498-4aa1-b3f9-65c63707d280-kube-api-access-bv4h4\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992338 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-tmp-dir\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992371 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-socket-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992396 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-serving-cert\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992420 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-kube-api-access\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992484 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/946bd7f5-92cd-435d-9ff8-72af506917be-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992521 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-signing-cabundle\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992550 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-proxy-tls\") pod 
\"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992579 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4j2gb\" (UniqueName: \"kubernetes.io/projected/18a087c3-ca43-45fb-bacd-4689a2362ac0-kube-api-access-4j2gb\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992626 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wz5f5\" (UniqueName: \"kubernetes.io/projected/4c75b2fc-a93e-44bd-9070-7512402f3f71-kube-api-access-wz5f5\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992710 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xwww\" (UniqueName: \"kubernetes.io/projected/51903662-2d95-48d2-b713-8ae2f2885e8b-kube-api-access-9xwww\") pod \"ingress-canary-6xbbq\" (UID: \"51903662-2d95-48d2-b713-8ae2f2885e8b\") " pod="openshift-ingress-canary/ingress-canary-6xbbq" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992800 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992828 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df3bac84-ca0c-4b27-a190-a808916babea-metrics-tls\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.992857 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xsps\" (UniqueName: \"kubernetes.io/projected/67f7f3b0-5f2e-4242-be97-3e765a5ea9e0-kube-api-access-5xsps\") pod \"control-plane-machine-set-operator-75ffdb6fcd-66dzp\" (UID: \"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:48.993740 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-serving-cert\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.041471 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-client\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.042031 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-serving-cert\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.042474 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-stats-auth\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.042948 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d8242bd-da35-455c-b000-06d3298c3d1d-webhook-cert\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.044175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-etcd-service-ca\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.044790 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-mountpoint-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.044828 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zd9\" (UniqueName: \"kubernetes.io/projected/2b378fea-0d65-410c-86a7-e98466259ea0-kube-api-access-85zd9\") pod \"authentication-operator-7f5c659b84-4j9qb\" (UID: \"2b378fea-0d65-410c-86a7-e98466259ea0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.044985 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.045008 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.045130 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/946bd7f5-92cd-435d-9ff8-72af506917be-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.045201 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.045274 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.045671 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.046978 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-tmp-dir\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.047826 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-default-certificate\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.048105 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/18a087c3-ca43-45fb-bacd-4689a2362ac0-node-bootstrap-token\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.048193 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-metrics-tls\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.048221 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51903662-2d95-48d2-b713-8ae2f2885e8b-cert\") pod \"ingress-canary-6xbbq\" (UID: \"51903662-2d95-48d2-b713-8ae2f2885e8b\") " pod="openshift-ingress-canary/ingress-canary-6xbbq" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.048266 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvr2c\" (UniqueName: \"kubernetes.io/projected/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-kube-api-access-xvr2c\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.048317 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3bac84-ca0c-4b27-a190-a808916babea-config-volume\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.048356 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67f7f3b0-5f2e-4242-be97-3e765a5ea9e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-66dzp\" (UID: \"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.048881 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.049509 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-socket-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.051558 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2019e529-0498-4aa1-b3f9-65c63707d280-tmpfs\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.052355 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d8242bd-da35-455c-b000-06d3298c3d1d-apiservice-cert\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.052726 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-plugins-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.054565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87353f19-deb2-41e6-bff6-3e2bb861ce33-metrics-certs\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.055446 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.555423864 +0000 UTC m=+140.714624989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.056396 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/df3bac84-ca0c-4b27-a190-a808916babea-tmp-dir\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.056705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.056751 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7w46d\" (UniqueName: \"kubernetes.io/projected/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-kube-api-access-7w46d\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.056803 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthvx\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-kube-api-access-tthvx\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057081 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-tmp-dir\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057504 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-tmp-dir\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057542 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057633 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e71a821d-2797-4bf9-96d3-d9a384e336e1-config\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057664 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-tmpfs\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057673 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057694 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057840 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25b4983a-dbb4-499e-9b78-ef637f425116-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-hkvjl\" (UID: \"25b4983a-dbb4-499e-9b78-ef637f425116\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057873 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-signing-key\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057933 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-csi-data-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.057976 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2019e529-0498-4aa1-b3f9-65c63707d280-srv-cert\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.058003 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c75b2fc-a93e-44bd-9070-7512402f3f71-tmp\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: 
\"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.058025 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.058057 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6tzs\" (UniqueName: \"kubernetes.io/projected/197eb808-9411-4b4c-b882-85f9c3479dae-kube-api-access-g6tzs\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.058098 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-registration-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.058119 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71a821d-2797-4bf9-96d3-d9a384e336e1-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.058230 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-tmpfs\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.059752 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.060092 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.060215 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.067266 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-csi-data-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.067546 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/946bd7f5-92cd-435d-9ff8-72af506917be-ready\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: 
\"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.068559 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/197eb808-9411-4b4c-b882-85f9c3479dae-registration-dir\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.068702 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c75b2fc-a93e-44bd-9070-7512402f3f71-tmp\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.072210 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-config\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.075442 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-srv-cert\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.079679 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.080745 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.082365 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-serving-cert\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.083410 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.083721 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.084450 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.086718 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/946bd7f5-92cd-435d-9ff8-72af506917be-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.090184 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2019e529-0498-4aa1-b3f9-65c63707d280-profile-collector-cert\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.094020 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2019e529-0498-4aa1-b3f9-65c63707d280-srv-cert\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.096135 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/67f7f3b0-5f2e-4242-be97-3e765a5ea9e0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-66dzp\" (UID: \"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.097070 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51903662-2d95-48d2-b713-8ae2f2885e8b-cert\") pod \"ingress-canary-6xbbq\" (UID: \"51903662-2d95-48d2-b713-8ae2f2885e8b\") " pod="openshift-ingress-canary/ingress-canary-6xbbq" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.098359 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/25b4983a-dbb4-499e-9b78-ef637f425116-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-hkvjl\" (UID: \"25b4983a-dbb4-499e-9b78-ef637f425116\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.099368 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-ljq2k"] Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.099879 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.100258 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.135718 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.136900 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-df7td\" (UniqueName: 
\"kubernetes.io/projected/644c98f5-22e8-4e28-8d95-427acc12569c-kube-api-access-df7td\") pod \"migrator-866fcbc849-rv4fb\" (UID: \"644c98f5-22e8-4e28-8d95-427acc12569c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.138737 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.139917 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.141479 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-metrics-tls\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.149061 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-bound-sa-token\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.149159 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:11:49 crc kubenswrapper[5121]: W0126 00:11:49.149422 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da119f5_ef9e_41d0_adef_a5e261563611.slice/crio-9608db7a102327b7bc04859b6b89a1dde390594b45de12e5c2c6bcc889cc2e1a WatchSource:0}: Error finding container 9608db7a102327b7bc04859b6b89a1dde390594b45de12e5c2c6bcc889cc2e1a: Status 404 returned error can't find the container with id 9608db7a102327b7bc04859b6b89a1dde390594b45de12e5c2c6bcc889cc2e1a Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.150716 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc"] Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.159624 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.160560 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.660535298 +0000 UTC m=+140.819736423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.170082 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.187130 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.198529 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.203596 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.213532 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.261773 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.262097 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.762081313 +0000 UTC m=+140.921282438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.335849 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.363924 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.365021 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.86498483 +0000 UTC m=+141.024185955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.371417 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.372334 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.372803 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.372993 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.373018 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.373203 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.373289 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.374933 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71a821d-2797-4bf9-96d3-d9a384e336e1-config\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.381018 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.412678 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-signing-key\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.414357 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-signing-cabundle\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.467597 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71a821d-2797-4bf9-96d3-d9a384e336e1-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.468655 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.469200 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:49.969184326 +0000 UTC m=+141.128385451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.472779 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.478045 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-nhmff"]
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.480839 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.482328 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.482653 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.488157 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.490892 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.495841 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.511139 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.512090 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.513873 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" event={"ID":"aaefaab3-bc8a-4e99-8114-7b929c835941","Type":"ContainerStarted","Data":"a33e55a021f1035d7ae5f38e4da63c2b191892b1e5ea1ecfce7aefc52fc5daff"}
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.528210 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.528688 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" event={"ID":"eac9c212-b298-468b-a465-d924254ae8ab","Type":"ContainerStarted","Data":"126ec523caf9ee3a46284a8a1d1891b443ea45b0b94ccf25c0554edf1e68a240"}
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.530083 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.536122 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/18a087c3-ca43-45fb-bacd-4689a2362ac0-certs\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.543441 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-4whj5"]
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.550332 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.550786 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rqsvg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.550900 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.557019 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-jxx48"]
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.564784 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-n6btg" event={"ID":"413e3cab-21d5-4c17-9ac8-4cfb8602343c","Type":"ContainerStarted","Data":"a033e685a3035e7502669160a363774731135008c8bcb6ed59679dad5a6da2d9"}
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.565279 5121 request.go:752] "Waited before sending request" delay="1.010145249s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.569573 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.569900 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.069869906 +0000 UTC m=+141.229071041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
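The repeating UnmountVolume.TearDown and MountVolume.MountDevice failures above (and throughout the rest of this section) share one root cause: at this point in startup the kubelet has no registered CSI plugin named kubevirt.io.hostpath-provisioner, so every volume operation against pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 is requeued with a 500ms backoff. Registration only happens once the driver's node plugin pod (csi-hostpathplugin, whose volumes are mounted further down in this log) starts and announces itself to the kubelet. A minimal sketch of how one could watch for that registration from outside, assuming the official kubernetes Python client and a working kubeconfig; the node name "crc" comes from this log, everything else here is illustrative and not part of the log:

    # Illustrative sketch: poll the CSINode object until the kubelet reports
    # the hostpath driver as registered. Assumes `pip install kubernetes`.
    import time
    from kubernetes import client, config

    config.load_kube_config()                      # or load_incluster_config()
    storage = client.StorageV1Api()

    WANTED = "kubevirt.io.hostpath-provisioner"    # driver name from the errors above

    while True:
        csinode = storage.read_csi_node("crc")     # CSINode mirrors kubelet plugin registration
        names = [d.name for d in (csinode.spec.drivers or [])]
        print("registered CSI drivers:", names or "none")
        if WANTED in names:
            break                                  # volume ops for the PVC can now proceed
        time.sleep(1)

Until that name shows up in the list, the reconciler keeps requeueing the same two operations at 500ms intervals, which is exactly the pattern in the entries that follow.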
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.069869906 +0000 UTC m=+141.229071041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.569935 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.570493 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.571907 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.071885957 +0000 UTC m=+141.231087112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.575845 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" event={"ID":"a7358b62-8abe-4d72-ae2d-29f96ed81902","Type":"ContainerStarted","Data":"70f61d6e678df4be9ee461a0be16d81441fffff26d928e097880d8710bab4cc6"} Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.581348 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" event={"ID":"316af2c1-6a7f-4000-8926-49a441b4f1cc","Type":"ContainerStarted","Data":"61c8c15510efb7fe208f54b23063094b543b2a3f1d7f2e5c53c14c1aa671fde9"} Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.584614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/18a087c3-ca43-45fb-bacd-4689a2362ac0-node-bootstrap-token\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.588109 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 
00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.589673 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" event={"ID":"9da119f5-ef9e-41d0-adef-a5e261563611","Type":"ContainerStarted","Data":"9608db7a102327b7bc04859b6b89a1dde390594b45de12e5c2c6bcc889cc2e1a"} Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.608612 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.618358 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.618567 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tgcgk"] Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.627673 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.647863 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.648860 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3bac84-ca0c-4b27-a190-a808916babea-config-volume\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.657933 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-xzxxt"] Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.669261 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.673942 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.674174 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.174123913 +0000 UTC m=+141.333325048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.676628 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.677323 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.177300909 +0000 UTC m=+141.336502034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.689015 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.693229 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/df3bac84-ca0c-4b27-a190-a808916babea-metrics-tls\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm" Jan 26 00:11:49 crc kubenswrapper[5121]: W0126 00:11:49.700399 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75e2dc1c_f659_4dc2_a18d_141f468e666a.slice/crio-64766890c147c7bcfe0ee4fc626955de190fb4f97ec1b25f0642c9d559872823 WatchSource:0}: Error finding container 64766890c147c7bcfe0ee4fc626955de190fb4f97ec1b25f0642c9d559872823: Status 404 returned error can't find the container with id 64766890c147c7bcfe0ee4fc626955de190fb4f97ec1b25f0642c9d559872823 Jan 26 00:11:49 crc kubenswrapper[5121]: W0126 00:11:49.710438 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78781662_c6e5_43f1_8914_a11c064230ca.slice/crio-bcb77203d108e201aca995c27f2fa076e1fc0aa8634bb7af52c83ff4c3755790 WatchSource:0}: Error finding container bcb77203d108e201aca995c27f2fa076e1fc0aa8634bb7af52c83ff4c3755790: Status 404 returned error can't find the container with id bcb77203d108e201aca995c27f2fa076e1fc0aa8634bb7af52c83ff4c3755790 Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.755815 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xvbc\" (UniqueName: 
\"kubernetes.io/projected/e93d8f25-2b5b-4f00-a6a7-bc1ee0690800-kube-api-access-2xvbc\") pod \"service-ca-operator-5b9c976747-8flxd\" (UID: \"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.760459 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmcj4\" (UniqueName: \"kubernetes.io/projected/1d8242bd-da35-455c-b000-06d3298c3d1d-kube-api-access-dmcj4\") pod \"packageserver-7d4fc7d867-ztcvg\" (UID: \"1d8242bd-da35-455c-b000-06d3298c3d1d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.775618 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvrb\" (UniqueName: \"kubernetes.io/projected/d013c3f9-0e7e-4b67-9fd0-6f9e14c64287-kube-api-access-6vvrb\") pod \"etcd-operator-69b85846b6-94msz\" (UID: \"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.780781 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.781262 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.281224867 +0000 UTC m=+141.440425992 (durationBeforeRetry 500ms). 
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.797435 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prw2p\" (UniqueName: \"kubernetes.io/projected/87353f19-deb2-41e6-bff6-3e2bb861ce33-kube-api-access-prw2p\") pod \"router-default-68cf44c8b8-bgksv\" (UID: \"87353f19-deb2-41e6-bff6-3e2bb861ce33\") " pod="openshift-ingress/router-default-68cf44c8b8-bgksv"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.838575 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8m95\" (UniqueName: \"kubernetes.io/projected/df3bac84-ca0c-4b27-a190-a808916babea-kube-api-access-w8m95\") pod \"dns-default-p5bxm\" (UID: \"df3bac84-ca0c-4b27-a190-a808916babea\") " pod="openshift-dns/dns-default-p5bxm"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.857335 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8hk\" (UniqueName: \"kubernetes.io/projected/e71a821d-2797-4bf9-96d3-d9a384e336e1-kube-api-access-ld8hk\") pod \"kube-storage-version-migrator-operator-565b79b866-5r8lz\" (UID: \"e71a821d-2797-4bf9-96d3-d9a384e336e1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.878890 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j2gb\" (UniqueName: \"kubernetes.io/projected/18a087c3-ca43-45fb-bacd-4689a2362ac0-kube-api-access-4j2gb\") pod \"machine-config-server-48wqr\" (UID: \"18a087c3-ca43-45fb-bacd-4689a2362ac0\") " pod="openshift-machine-config-operator/machine-config-server-48wqr"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.883338 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.884141 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.384126433 +0000 UTC m=+141.543327558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.890687 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb"]
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.896806 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xwww\" (UniqueName: \"kubernetes.io/projected/51903662-2d95-48d2-b713-8ae2f2885e8b-kube-api-access-9xwww\") pod \"ingress-canary-6xbbq\" (UID: \"51903662-2d95-48d2-b713-8ae2f2885e8b\") " pod="openshift-ingress-canary/ingress-canary-6xbbq"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.905995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9shw\" (UniqueName: \"kubernetes.io/projected/420ea536-e22c-4ded-972a-3fe1ad5bc1ce-kube-api-access-t9shw\") pod \"service-ca-74545575db-mh8jv\" (UID: \"420ea536-e22c-4ded-972a-3fe1ad5bc1ce\") " pod="openshift-service-ca/service-ca-74545575db-mh8jv"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.915545 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.918214 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.936971 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnc98\" (UniqueName: \"kubernetes.io/projected/25b4983a-dbb4-499e-9b78-ef637f425116-kube-api-access-qnc98\") pod \"package-server-manager-77f986bd66-hkvjl\" (UID: \"25b4983a-dbb4-499e-9b78-ef637f425116\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.962348 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv4h4\" (UniqueName: \"kubernetes.io/projected/2019e529-0498-4aa1-b3f9-65c63707d280-kube-api-access-bv4h4\") pod \"olm-operator-5cdf44d969-ldf8d\" (UID: \"2019e529-0498-4aa1-b3f9-65c63707d280\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.973243 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xsps\" (UniqueName: \"kubernetes.io/projected/67f7f3b0-5f2e-4242-be97-3e765a5ea9e0-kube-api-access-5xsps\") pod \"control-plane-machine-set-operator-75ffdb6fcd-66dzp\" (UID: \"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp"
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.987456 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:49 crc kubenswrapper[5121]: E0126 00:11:49.989782 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.489738162 +0000 UTC m=+141.648939287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:49 crc kubenswrapper[5121]: I0126 00:11:49.991990 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj88q\" (UniqueName: \"kubernetes.io/projected/a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b-kube-api-access-mj88q\") pod \"catalog-operator-75ff9f647d-r5x7x\" (UID: \"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.023673 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrsq\" (UniqueName: \"kubernetes.io/projected/946bd7f5-92cd-435d-9ff8-72af506917be-kube-api-access-rrrsq\") pod \"cni-sysctl-allowlist-ds-dhklg\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.028321 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz5f5\" (UniqueName: \"kubernetes.io/projected/4c75b2fc-a93e-44bd-9070-7512402f3f71-kube-api-access-wz5f5\") pod \"marketplace-operator-547dbd544d-926kg\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.073347 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91c2eb8f-4a83-425b-b2f3-2b034728d8f1-kube-api-access\") pod \"kube-apiserver-operator-575994946d-w75l2\" (UID: \"91c2eb8f-4a83-425b-b2f3-2b034728d8f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.075928 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmhjw\" (UniqueName: \"kubernetes.io/projected/eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee-kube-api-access-cmhjw\") pod \"dns-operator-799b87ffcd-lhdjv\" (UID: \"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.097159 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.097648 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.597629789 +0000 UTC m=+141.756830924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.128954 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvr2c\" (UniqueName: \"kubernetes.io/projected/1ca0eaab-8776-4e08-811e-cb35fbe8f6a2-kube-api-access-xvr2c\") pod \"machine-config-controller-f9cdd68f7-l7fcz\" (UID: \"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.153881 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6tzs\" (UniqueName: \"kubernetes.io/projected/197eb808-9411-4b4c-b882-85f9c3479dae-kube-api-access-g6tzs\") pod \"csi-hostpathplugin-hs67g\" (UID: \"197eb808-9411-4b4c-b882-85f9c3479dae\") " pod="hostpath-provisioner/csi-hostpathplugin-hs67g"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.156253 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w46d\" (UniqueName: \"kubernetes.io/projected/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-kube-api-access-7w46d\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.157792 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5d73c73-c20a-43a6-b318-bfe8557d4dbb-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nsl8g\" (UID: \"a5d73c73-c20a-43a6-b318-bfe8557d4dbb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.198532 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.198858 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.698810824 +0000 UTC m=+141.858011949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.199793 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.200331 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.700315399 +0000 UTC m=+141.859516524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.304408 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.304799 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.80471416 +0000 UTC m=+141.963915295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.363908 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b"]
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.380292 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.406724 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.408187 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:50.908169844 +0000 UTC m=+142.067370969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.505231 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.508746 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.509630 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.009599436 +0000 UTC m=+142.168800561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.630799 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.631819 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.131795373 +0000 UTC m=+142.290996508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.658316 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" event={"ID":"a7358b62-8abe-4d72-ae2d-29f96ed81902","Type":"ContainerStarted","Data":"0b658c32212d448bbb19830eac29329359f8fb1a13b6642c5dbab32b2a6d7cb0"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.661590 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" event={"ID":"85bedc20-2632-45f3-bfac-d20d34024cb3","Type":"ContainerStarted","Data":"c704fdd42d3a2b691c42f651d33f62552a610b3bdb235627857f514d9db0699c"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.663201 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" event={"ID":"3bd78e9f-18ce-4592-866f-029d883e2d95","Type":"ContainerStarted","Data":"bd5730c26030ce728948e5c0fef46c54dcf663668446ea8b039117f8f91df8dd"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.664063 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" event={"ID":"78781662-c6e5-43f1-8914-a11c064230ca","Type":"ContainerStarted","Data":"bcb77203d108e201aca995c27f2fa076e1fc0aa8634bb7af52c83ff4c3755790"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.700192 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" event={"ID":"dfeddd81-f3cd-485c-8637-053e6d8cec00","Type":"ContainerStarted","Data":"1699c468854b88fb120734b83015a918585184dbfbca73ff65f46ffaa258bf76"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.700292 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" event={"ID":"dfeddd81-f3cd-485c-8637-053e6d8cec00","Type":"ContainerStarted","Data":"e5535a49c0e96a24d20bd3049d1e5a006564b9a72ac263c788444fb50cb49f58"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.704817 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podStartSLOduration=113.70479891 podStartE2EDuration="1m53.70479891s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:50.704321915 +0000 UTC m=+141.863523070" watchObservedRunningTime="2026-01-26 00:11:50.70479891 +0000 UTC m=+141.864000035"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.705478 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29489760-n6btg" podStartSLOduration=114.70547203 podStartE2EDuration="1m54.70547203s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:50.670809507 +0000 UTC m=+141.830010632" watchObservedRunningTime="2026-01-26 00:11:50.70547203 +0000 UTC m=+141.864673155"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.714934 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" event={"ID":"069690ff-331e-4ee8-bed5-24d79f939a40","Type":"ContainerStarted","Data":"a7fc03e9c26703c07aed73480a9915b133eaacb1520bc003aa5b9cf5dfbab35d"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.718416 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" event={"ID":"316af2c1-6a7f-4000-8926-49a441b4f1cc","Type":"ContainerStarted","Data":"3c8d7b18b01cc991cc62ff3b6c34d3e5fee1af0edb15bf70e2d89eb3b6ac74bb"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.720862 5121 generic.go:358] "Generic (PLEG): container finished" podID="6e039297-dc55-4c6b-b76e-d2b83365ca3d" containerID="05c3371529d72c91a2dd2cb068ec3f985f91beb843ea3b84274e130fb809262b" exitCode=0
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.720922 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" event={"ID":"6e039297-dc55-4c6b-b76e-d2b83365ca3d","Type":"ContainerDied","Data":"05c3371529d72c91a2dd2cb068ec3f985f91beb843ea3b84274e130fb809262b"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.726277 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" event={"ID":"9da119f5-ef9e-41d0-adef-a5e261563611","Type":"ContainerStarted","Data":"3deaf3193f0c1c328766e972ad6874b0dc4ce1fd325f7fb3f6d8fd9549bf58d6"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.732648 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.733162 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.233136253 +0000 UTC m=+142.392337378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.735755 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-jxx48" event={"ID":"75e2dc1c-f659-4dc2-a18d-141f468e666a","Type":"ContainerStarted","Data":"64766890c147c7bcfe0ee4fc626955de190fb4f97ec1b25f0642c9d559872823"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.779713 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" event={"ID":"2b378fea-0d65-410c-86a7-e98466259ea0","Type":"ContainerStarted","Data":"387ff91a98b6809c4ab41e5ea068fc73df8313748d7375d7f4f1e215df2bbc5c"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.782826 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" event={"ID":"387d3abf-783f-4184-81db-2fa8fa54ffc8","Type":"ContainerStarted","Data":"d221118f7e80730d9602701da654fc027f1f7b7f0224698f83da1c05b0f84ec2"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.784111 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.787463 5121 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-6ztm9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body=
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.787777 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.788113 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" event={"ID":"51aff718-c15d-4232-8ba2-db2b79dc020a","Type":"ContainerStarted","Data":"59b43c30036197b9892824fc5e2b94a1fc45f78d66ab8bd660a868f90c35dc71"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.791052 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" event={"ID":"63d0a3c7-ad3c-4556-b95a-7e1143caca62","Type":"ContainerStarted","Data":"3c72364ac720f3ca48296d64b5ffe8c3e04d84aa691f1581b802c796b37a0f26"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.793296 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-nhmff" event={"ID":"1d947e22-7d64-4bc5-a715-e95485fa0c57","Type":"ContainerStarted","Data":"5076750e9ca885315a5e7b157eb9dc42e7aa3d99984c502b7d31d9437956502d"}
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.824893 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rqsvg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.825078 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 26 00:11:50 crc kubenswrapper[5121]: I0126 00:11:50.834968 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:50 crc kubenswrapper[5121]: E0126 00:11:50.836995 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.336973417 +0000 UTC m=+142.496174542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.013558 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.013776 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.513721426 +0000 UTC m=+142.672922551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.014649 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.022116 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.522084828 +0000 UTC m=+142.681285963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.153182 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.153526 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.653501133 +0000 UTC m=+142.812702258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
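The readiness-probe failures logged above (route-controller-manager on 10.217.0.25:8443, oauth-openshift on 10.217.0.8:6443) are the kubelet's HTTP prober hitting /healthz before the container has bound its serving port; "connect: connection refused" during startup just means nothing is listening yet. A rough analogue of that probe, for illustration only: it assumes the third-party requests package and that the pod IP is reachable (true only from the node itself), and it is not the kubelet's actual prober code:

    # Rough analogue of the kubelet's HTTPS readiness probe (illustrative only).
    import requests

    def probe(url: str, timeout: float = 1.0) -> str:
        try:
            # HTTPS probes do not verify the pod's serving certificate.
            r = requests.get(url, timeout=timeout, verify=False)
            return f"success: status={r.status_code}"
        except requests.exceptions.ConnectionError as exc:
            return f"failure: {exc}"  # e.g. connection refused while the port is unbound

    # Pod IP, port, and path taken from the prober output above.
    print(probe("https://10.217.0.25:8443/healthz"))

Once the container starts serving, the same GET should return a success status and the pod flips to Ready, at which point the "SyncLoop (probe)" not-ready entries stop.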
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.154507 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.154973 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.654960276 +0000 UTC m=+142.814161401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.210663 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb"]
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.229447 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6xbbq"
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.232126 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bgksv"
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.255365 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.255797 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.75576561 +0000 UTC m=+142.914966735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.256033 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.257411 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.757400529 +0000 UTC m=+142.916601704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.295300 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg"]
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.359545 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.360640 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.859748189 +0000 UTC m=+143.018949314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.361477 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.362007 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.861982346 +0000 UTC m=+143.021183531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.439439 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-zxxq5" podStartSLOduration=115.439417236 podStartE2EDuration="1m55.439417236s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:51.384391391 +0000 UTC m=+142.543592526" watchObservedRunningTime="2026-01-26 00:11:51.439417236 +0000 UTC m=+142.598618361"
Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.462949 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.463490 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:51.96346601 +0000 UTC m=+143.122667135 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.573263 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.573607 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.073593524 +0000 UTC m=+143.232794649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.588176 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.677608 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.678210 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.178167451 +0000 UTC m=+143.337368576 (durationBeforeRetry 500ms). 
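Each failure arms a retry gate before the same operation may run again, which is what the "No retries permitted until ... (durationBeforeRetry 500ms)" lines record. The kubelet's volume manager applies an exponential backoff to repeated failures of the same operation; in this excerpt the delay sits at the 500ms floor throughout. A simplified stand-alone sketch of such a policy, where the doubling and the roughly two-minute cap are assumptions modeled on the kubelet's exponential-backoff helper, not a copy of it:

```go
// Sketch of the retry gate behind "No retries permitted until ...".
package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond // matches the log
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

// backoff tracks when a failed volume operation may run again.
type backoff struct {
	duration      time.Duration
	lastErrorTime time.Time
}

// update doubles the wait after each consecutive failure, up to the cap.
func (b *backoff) update(now time.Time) {
	if b.duration == 0 {
		b.duration = initialDurationBeforeRetry
	} else if b.duration = 2 * b.duration; b.duration > maxDurationBeforeRetry {
		b.duration = maxDurationBeforeRetry
	}
	b.lastErrorTime = now
}

// safeToRetry reports whether enough time has passed since the last failure.
func (b *backoff) safeToRetry(now time.Time) bool {
	return now.After(b.lastErrorTime.Add(b.duration))
}

func main() {
	var b backoff
	now := time.Now()
	for i := 0; i < 5; i++ {
		b.update(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			i+1, now.Add(b.duration).Format(time.RFC3339Nano), b.duration)
		now = now.Add(b.duration) // simulate the next attempt failing immediately
	}
}
```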
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.780780 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.781474 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.281451279 +0000 UTC m=+143.440652464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.864078 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-94msz"] Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.894096 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-nhmff" event={"ID":"1d947e22-7d64-4bc5-a715-e95485fa0c57","Type":"ContainerStarted","Data":"600c6dbf4bcea3257d37177b920c9390a6f6db3394f26d3a8b7a9d46a6f4c546"} Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.894885 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:51 crc kubenswrapper[5121]: E0126 00:11:51.895473 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.39545004 +0000 UTC m=+143.554651165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.896068 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-nhmff" Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.903821 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" event={"ID":"87353f19-deb2-41e6-bff6-3e2bb861ce33","Type":"ContainerStarted","Data":"7d3a048276360ddf7f844a9794b2d5580b49270ff6afaead92fc45bbbde1e190"} Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.904872 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" event={"ID":"644c98f5-22e8-4e28-8d95-427acc12569c","Type":"ContainerStarted","Data":"90c90956d3f1765f2bf712f0bd90e5d5ed3126da902e36279019a5fb3d60ddda"} Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.906367 5121 generic.go:358] "Generic (PLEG): container finished" podID="9da119f5-ef9e-41d0-adef-a5e261563611" containerID="3deaf3193f0c1c328766e972ad6874b0dc4ce1fd325f7fb3f6d8fd9549bf58d6" exitCode=0 Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.906466 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" event={"ID":"9da119f5-ef9e-41d0-adef-a5e261563611","Type":"ContainerDied","Data":"3deaf3193f0c1c328766e972ad6874b0dc4ce1fd325f7fb3f6d8fd9549bf58d6"} Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.907712 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-jxx48" event={"ID":"75e2dc1c-f659-4dc2-a18d-141f468e666a","Type":"ContainerStarted","Data":"ae5e3aed8cf07bc3ecc9b103c7c135b4a03b71c6a016530e8807e7a153f33e67"} Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.908471 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.909094 5121 patch_prober.go:28] interesting pod/console-operator-67c89758df-nhmff container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.909133 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-nhmff" podUID="1d947e22-7d64-4bc5-a715-e95485fa0c57" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.921148 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:11:51 crc kubenswrapper[5121]: 
I0126 00:11:51.921221 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:11:51 crc kubenswrapper[5121]: I0126 00:11:51.978507 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" event={"ID":"aaefaab3-bc8a-4e99-8114-7b929c835941","Type":"ContainerStarted","Data":"3125bd2436d36fc0f2743589f6fd38ac9a8a94049f96b6788a81d5538def0480"} Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:51.998446 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:51.999739 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.499682966 +0000 UTC m=+143.658884091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.022741 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" event={"ID":"1d8242bd-da35-455c-b000-06d3298c3d1d","Type":"ContainerStarted","Data":"4f3766d5ca2ba56ce2c0ba5e0e1f0c700bd59168829dac2de6888bab166e3f54"} Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.023429 5121 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-6ztm9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.023500 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.071671 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd"] Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.102105 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.107693 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.607613854 +0000 UTC m=+143.766815039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.187019 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.215510 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.215964 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.715946104 +0000 UTC m=+143.875147230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.318367 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.320012 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.819982295 +0000 UTC m=+143.979183420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.496257 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.497214 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:52.997181058 +0000 UTC m=+144.156382183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.552634 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.554982 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.555975 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6xbbq"] Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.629450 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.631221 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.131196901 +0000 UTC m=+144.290398026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.632465 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.653153 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.712367 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngcw5" podStartSLOduration=116.712210999 podStartE2EDuration="1m56.712210999s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:52.631785498 +0000 UTC m=+143.790986633" watchObservedRunningTime="2026-01-26 00:11:52.712210999 +0000 UTC m=+143.871412124" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.732013 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.733374 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.233359225 +0000 UTC m=+144.392560350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.825493 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-g5dxr" podStartSLOduration=116.825461257 podStartE2EDuration="1m56.825461257s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:52.822165648 +0000 UTC m=+143.981366773" watchObservedRunningTime="2026-01-26 00:11:52.825461257 +0000 UTC m=+143.984662382" Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.834470 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.834822 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.334801718 +0000 UTC m=+144.494002843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.905684 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp"] Jan 26 00:11:52 crc kubenswrapper[5121]: I0126 00:11:52.936260 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:52 crc kubenswrapper[5121]: E0126 00:11:52.936734 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.436715935 +0000 UTC m=+144.595917060 (durationBeforeRetry 500ms). 
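The "Observed pod startup duration" lines report two numbers: podStartE2EDuration is the time from pod creation to the observed running state, and podStartSLOduration additionally subtracts image-pull time. Here the zeroed pull timestamps ("0001-01-01 ...") mean the images were already present, so the two values coincide. A worked version of that arithmetic using the console-64d44f6ddf-g5dxr entry above (a sketch of the accounting, not the tracker's code):

```go
// Reproduces the startup-duration numbers from the log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and watchObservedRunningTime from the log.
	created, _ := time.Parse(time.RFC3339, "2026-01-26T00:09:56Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-26T00:11:52.825461257Z")

	// Zeroed pull timestamps mean no image pull was needed, so nothing
	// is subtracted for the SLO duration.
	var firstStartedPulling, lastFinishedPulling time.Time

	e2e := running.Sub(created) // podStartE2EDuration
	slo := e2e                  // podStartSLOduration
	if !firstStartedPulling.IsZero() && !lastFinishedPulling.IsZero() {
		slo -= lastFinishedPulling.Sub(firstStartedPulling)
	}
	// Prints: podStartE2EDuration=1m56.825461257s podStartSLOduration=116.825461257s
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%.9fs\n", e2e, slo.Seconds())
}
```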
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.039954 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.040363 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.540340993 +0000 UTC m=+144.699542118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.040430 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" event={"ID":"85bedc20-2632-45f3-bfac-d20d34024cb3","Type":"ContainerStarted","Data":"22a2b08e25d5c73e796608d918f29d3646e67adaa49611c33ce57e43bebd12bd"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.043917 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" event={"ID":"946bd7f5-92cd-435d-9ff8-72af506917be","Type":"ContainerStarted","Data":"72b3fd11557c617b301ee09bd29315ddbfd873bc6144cf3e6744267484a5af55"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.045422 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6xbbq" event={"ID":"51903662-2d95-48d2-b713-8ae2f2885e8b","Type":"ContainerStarted","Data":"24b19ea8a9c8f5e58dd8594c3c694de4ba0d2c4ee53f42579c6e978e528d20f6"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.048184 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" event={"ID":"78781662-c6e5-43f1-8914-a11c064230ca","Type":"ContainerStarted","Data":"9bfe21660e4297076895acff14c1840cdb69d0f276a1b49d4cb27dd228e3d78c"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.053490 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" event={"ID":"dfeddd81-f3cd-485c-8637-053e6d8cec00","Type":"ContainerStarted","Data":"c87150bee53b70147e75178e364f6665076b0c183f0b2e343fcc56c76d8d0d8b"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.065662 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" event={"ID":"9da119f5-ef9e-41d0-adef-a5e261563611","Type":"ContainerStarted","Data":"21071edcff0c2ad526b0fa13d3f83666742e74257e6b8aa126ce333bb6a77f37"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.068735 5121 generic.go:358] "Generic (PLEG): container finished" podID="cfaf2a6d-872e-498c-bffd-089932c74e19" containerID="1d773cedc273173646956950ece2481e3f36b85363f8fd009a6502f8b0d288f1" exitCode=0 Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.068900 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" event={"ID":"cfaf2a6d-872e-498c-bffd-089932c74e19","Type":"ContainerDied","Data":"1d773cedc273173646956950ece2481e3f36b85363f8fd009a6502f8b0d288f1"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.102477 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" event={"ID":"2b378fea-0d65-410c-86a7-e98466259ea0","Type":"ContainerStarted","Data":"048741849ef00e3ba17c5e8e58c3510121295a5cbd1168e21228f2ff5b7c23dd"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.105541 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" event={"ID":"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800","Type":"ContainerStarted","Data":"259b855a9fb7e06ec85452e5647f48372cf30552d0cd20879181568ae4c23858"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.106624 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" event={"ID":"51aff718-c15d-4232-8ba2-db2b79dc020a","Type":"ContainerStarted","Data":"59e57e19fe7c5da164649c86d9bde8c104344176326914f06e33851ccbdd8bb7"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.108401 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" event={"ID":"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0","Type":"ContainerStarted","Data":"1b179d1310a66e838ddfb1d6c1d0a46b1b7985a4a3bf0601569dc8c3b5999fb1"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.117485 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" event={"ID":"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287","Type":"ContainerStarted","Data":"ce3e3c6bea6a434e7330c7b778829004661342d0eece6eb91c6e092334865a66"} Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.143166 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.144061 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.644041434 +0000 UTC m=+144.803242559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.154911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.247175 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.248428 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.748404335 +0000 UTC m=+144.907605470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.251335 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.251846 5121 patch_prober.go:28] interesting pod/console-operator-67c89758df-nhmff container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.251889 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-nhmff" podUID="1d947e22-7d64-4bc5-a715-e95485fa0c57" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.252170 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.252225 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: 
connection refused" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.282307 5121 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-6ztm9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.282533 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.282738 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-tgcgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.282902 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.330469 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.346294 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41092: no serving certificate available for the kubelet" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.350557 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.366712 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.368662 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.868638893 +0000 UTC m=+145.027840028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.428949 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.441844 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41094: no serving certificate available for the kubelet" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.453003 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.454495 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:53.954465016 +0000 UTC m=+145.113666151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.477906 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.483839 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vbxd4" podStartSLOduration=117.483812529 podStartE2EDuration="1m57.483812529s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.406841103 +0000 UTC m=+144.566042248" watchObservedRunningTime="2026-01-26 00:11:53.483812529 +0000 UTC m=+144.643013654" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.484610 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-nhmff" podStartSLOduration=117.484604253 podStartE2EDuration="1m57.484604253s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.449352152 +0000 UTC m=+144.608553287" watchObservedRunningTime="2026-01-26 00:11:53.484604253 +0000 UTC m=+144.643805378" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.549246 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41104: no serving certificate available for the kubelet" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.555409 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " 
pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.557714 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.057692012 +0000 UTC m=+145.216893167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.652635 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41112: no serving certificate available for the kubelet" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.658801 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.659158 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.159123985 +0000 UTC m=+145.318325110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.659461 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.659842 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.159829646 +0000 UTC m=+145.319030771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.662031 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-htdxn" podStartSLOduration=117.662014282 podStartE2EDuration="1m57.662014282s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.660902548 +0000 UTC m=+144.820103693" watchObservedRunningTime="2026-01-26 00:11:53.662014282 +0000 UTC m=+144.821215407" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.720669 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-mh8jv" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.761837 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.762254 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.262216627 +0000 UTC m=+145.421417762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.764680 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.767159 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.267139775 +0000 UTC m=+145.426340910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.761747 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" podStartSLOduration=117.761725342 podStartE2EDuration="1m57.761725342s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.758519946 +0000 UTC m=+144.917721071" watchObservedRunningTime="2026-01-26 00:11:53.761725342 +0000 UTC m=+144.920926467" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.780403 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41114: no serving certificate available for the kubelet" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.783699 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2"] Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.843452 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4j9qb" podStartSLOduration=117.843431941 podStartE2EDuration="1m57.843431941s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.801975844 +0000 UTC m=+144.961176989" watchObservedRunningTime="2026-01-26 00:11:53.843431941 +0000 UTC m=+145.002633076" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.848310 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-x9ptc" podStartSLOduration=117.848285607 podStartE2EDuration="1m57.848285607s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.842188644 +0000 UTC m=+145.001389779" watchObservedRunningTime="2026-01-26 00:11:53.848285607 +0000 UTC m=+145.007486742" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.868890 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.870010 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.36998461 +0000 UTC m=+145.529185745 (durationBeforeRetry 500ms). 
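The "No sandbox for pod can be found. Need to start a new one" lines mark pods whose sandboxes did not survive the restart; when computing pod actions, the kubelet creates a fresh sandbox whenever none exists or the newest one is unusable. A toy version of that decision, illustrative only (the real logic lives in the runtime manager's pod-actions computation):

```go
// Sketch of the create-a-new-sandbox decision.
package main

import "fmt"

// sandbox is a pared-down view of a pod sandbox status.
type sandbox struct {
	id    string
	ready bool
}

// needsNewSandbox reports whether a fresh sandbox must be created,
// assuming sandboxes are ordered newest first.
func needsNewSandbox(sandboxes []sandbox) bool {
	if len(sandboxes) == 0 {
		return true // "No sandbox for pod can be found. Need to start a new one"
	}
	return !sandboxes[0].ready // newest sandbox exists but is unusable
}

func main() {
	fmt.Println(needsNewSandbox(nil))                           // true: start a new one
	fmt.Println(needsNewSandbox([]sandbox{{"24b19ea8", true}})) // false: reuse it
}
```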
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.902186 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41128: no serving certificate available for the kubelet" Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.917740 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl"] Jan 26 00:11:53 crc kubenswrapper[5121]: I0126 00:11:53.984858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:53 crc kubenswrapper[5121]: E0126 00:11:53.985267 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.485249699 +0000 UTC m=+145.644450824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.021400 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41130: no serving certificate available for the kubelet" Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.042420 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podStartSLOduration=118.042391549 podStartE2EDuration="1m58.042391549s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:53.974231127 +0000 UTC m=+145.133432272" watchObservedRunningTime="2026-01-26 00:11:54.042391549 +0000 UTC m=+145.201592674" Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.085937 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.086442 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.097639 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.59759573 +0000 UTC m=+145.756796855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.104332 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.105036 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.605018923 +0000 UTC m=+145.764220048 (durationBeforeRetry 500ms). 
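Every mount and unmount failure above shares one root cause: at this point in the boot the kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner, because the csi-hostpathplugin pod that would register it is itself still being started (its sandbox is only created at 00:11:54 below). A minimal sketch, assuming a reachable kubeconfig, of how one could confirm registration from outside the node; the node name "crc" and the driver name come from this log, everything else is illustrative:

```go
// csinode_check.go - illustrative only, not part of the log. The CSINode
// object lists the drivers that have completed kubelet plugin registration
// on a node; until kubevirt.io.hostpath-provisioner appears there, every
// MountDevice/TearDownAt call fails exactly as in the entries above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// "crc" is the node name seen in this log's journal prefix.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered:", d.Name)
	}
}
```

Once the driver shows up in the CSINode spec, the pending operations for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 should succeed on a later retry.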
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.148500 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-jxx48" podStartSLOduration=118.148476871 podStartE2EDuration="1m58.148476871s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.084911508 +0000 UTC m=+145.244112653" watchObservedRunningTime="2026-01-26 00:11:54.148476871 +0000 UTC m=+145.307677996"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.152728 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x"]
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.160045 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41134: no serving certificate available for the kubelet"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.183782 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" event={"ID":"85bedc20-2632-45f3-bfac-d20d34024cb3","Type":"ContainerStarted","Data":"c1481894124fbf41a0c9beeae69167e516e8e5154acb0b49c85d045070c96f35"}
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.182463 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nbmc7" podStartSLOduration=118.182443303 podStartE2EDuration="1m58.182443303s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.174481784 +0000 UTC m=+145.333682929" watchObservedRunningTime="2026-01-26 00:11:54.182443303 +0000 UTC m=+145.341644428"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.184924 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-48wqr"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.200862 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6xbbq" event={"ID":"51903662-2d95-48d2-b713-8ae2f2885e8b","Type":"ContainerStarted","Data":"1f8e508178d201e2a79961519f96d410e2d64e7acdeea123c9359f461dafc20e"}
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.197685 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" podStartSLOduration=118.197659941 podStartE2EDuration="1m58.197659941s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.196714413 +0000 UTC m=+145.355915548" watchObservedRunningTime="2026-01-26 00:11:54.197659941 +0000 UTC m=+145.356861066"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.206972 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.207768 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.707720554 +0000 UTC m=+145.866921689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.227414 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" event={"ID":"3bd78e9f-18ce-4592-866f-029d883e2d95","Type":"ContainerStarted","Data":"30f9a43dfff1437b43218b7e3c78646e4851f431af51ea47dc53399e502e4905"}
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.277064 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-mgsgw" podStartSLOduration=118.27703985 podStartE2EDuration="1m58.27703985s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.274955527 +0000 UTC m=+145.434156652" watchObservedRunningTime="2026-01-26 00:11:54.27703985 +0000 UTC m=+145.436240985"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.282844 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" event={"ID":"644c98f5-22e8-4e28-8d95-427acc12569c","Type":"ContainerStarted","Data":"2c33dfe689efbf944151d003a9c7897924deb95cad8f714db3e2580de2a2e979"}
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.287647 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" event={"ID":"6e039297-dc55-4c6b-b76e-d2b83365ca3d","Type":"ContainerStarted","Data":"da1e62403df27c09950e2c57dfbfdaf84d39bc76307c980e172156b2fdb759f2"}
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.310961 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" event={"ID":"1d8242bd-da35-455c-b000-06d3298c3d1d","Type":"ContainerStarted","Data":"c72b5b5c1a52e787c19691a473b6e2ab6402a8a79223624bd30f36742c9ddaf0"}
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.311520 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.316114 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.319222 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.819201489 +0000 UTC m=+145.978402614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.344822 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6xbbq" podStartSLOduration=10.344796489 podStartE2EDuration="10.344796489s" podCreationTimestamp="2026-01-26 00:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.309672902 +0000 UTC m=+145.468874047" watchObservedRunningTime="2026-01-26 00:11:54.344796489 +0000 UTC m=+145.503997614"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.346871 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d"]
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.348631 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hs67g"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.357997 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p5bxm"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.398531 5121 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-ztcvg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body=
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.398626 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" podUID="1d8242bd-da35-455c-b000-06d3298c3d1d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.417994 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.418298 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" event={"ID":"51aff718-c15d-4232-8ba2-db2b79dc020a","Type":"ContainerStarted","Data":"9611bfad302791e9c1b9922f13cd363585adc34735596133b2027a051aa7fdd0"}
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.419239 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:54.918999182 +0000 UTC m=+146.078200307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
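Each failed operation above is parked by nestedpendingoperations until the logged deadline; the delay here is the initial 500ms, and on persistent failure the kubelet backs off exponentially (the exact constants are kubelet-internal). The shape of that retry loop can be reproduced with apimachinery's wait helpers; the constants below are assumptions for illustration, not the kubelet's actual values:

```go
// backoff_sketch.go - illustrative only. Mimics the "No retries permitted
// until ... (durationBeforeRetry 500ms)" pattern: a failing operation is
// retried after a delay that starts at 500ms and grows by a factor each time.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial delay, as in the log
		Factor:   2.0,                    // assumed growth factor
		Steps:    5,                      // give up after five attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: driver not registered yet\n", attempt)
		return false, nil // false, nil means "retry after the next delay"
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("gave up after", attempt, "attempts")
	}
}
```

In the log the condition clears once the CSI plugin registers, so the retries never reach a terminal give-up; they simply start succeeding.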
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.425618 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" podStartSLOduration=118.42558623 podStartE2EDuration="1m58.42558623s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.387587997 +0000 UTC m=+145.546789122" watchObservedRunningTime="2026-01-26 00:11:54.42558623 +0000 UTC m=+145.584787365"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.483417 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg" podStartSLOduration=117.48338736 podStartE2EDuration="1m57.48338736s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.437288442 +0000 UTC m=+145.596489567" watchObservedRunningTime="2026-01-26 00:11:54.48338736 +0000 UTC m=+145.642588485"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.531630 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.532480 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.032465607 +0000 UTC m=+146.191666732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.614048 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-xzxxt" podStartSLOduration=118.614024621 podStartE2EDuration="1m58.614024621s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.590098361 +0000 UTC m=+145.749299486" watchObservedRunningTime="2026-01-26 00:11:54.614024621 +0000 UTC m=+145.773225756"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.628398 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" event={"ID":"87353f19-deb2-41e6-bff6-3e2bb861ce33","Type":"ContainerStarted","Data":"42892ef3f9513ec52d1a4cbb7e26ba17de905a7e3adf342a43a37996eb6a7773"}
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.632040 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.633966 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.635745 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.135688033 +0000 UTC m=+146.294889168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.636310 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-tgcgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.636902 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.688151 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-4whj5" podStartSLOduration=118.688121261 podStartE2EDuration="1m58.688121261s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.677013096 +0000 UTC m=+145.836214241" watchObservedRunningTime="2026-01-26 00:11:54.688121261 +0000 UTC m=+145.847322386"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.738297 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podStartSLOduration=118.7382786 podStartE2EDuration="1m58.7382786s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.737589639 +0000 UTC m=+145.896790784" watchObservedRunningTime="2026-01-26 00:11:54.7382786 +0000 UTC m=+145.897479735"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.738726 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.742501 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.242482467 +0000 UTC m=+146.401683592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.770070 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" podStartSLOduration=118.770044966 podStartE2EDuration="1m58.770044966s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:54.768146779 +0000 UTC m=+145.927347924" watchObservedRunningTime="2026-01-26 00:11:54.770044966 +0000 UTC m=+145.929246091"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.839540 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.840186 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.340163156 +0000 UTC m=+146.499364291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.882875 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41136: no serving certificate available for the kubelet"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.898814 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td"
Jan 26 00:11:54 crc kubenswrapper[5121]: I0126 00:11:54.988060 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:54 crc kubenswrapper[5121]: E0126 00:11:54.989641 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.489622674 +0000 UTC m=+146.648823869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.043880 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.048902 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.070312 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.070593 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.090346 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.090820 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aeb8de7-b6c5-4617-8139-93af186b1adc-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.090852 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aeb8de7-b6c5-4617-8139-93af186b1adc-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.090959 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.590934662 +0000 UTC m=+146.750135787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.105993 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.146732 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.192052 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.192272 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aeb8de7-b6c5-4617-8139-93af186b1adc-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.192333 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aeb8de7-b6c5-4617-8139-93af186b1adc-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.193616 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.693596111 +0000 UTC m=+146.852797236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.193839 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aeb8de7-b6c5-4617-8139-93af186b1adc-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.243171 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-bgksv"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.244479 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.244537 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.284213 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aeb8de7-b6c5-4617-8139-93af186b1adc-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.296850 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.297674 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.797654153 +0000 UTC m=+146.956855278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.405550 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.406033 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:55.906006933 +0000 UTC m=+147.065208058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.420541 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.454132 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-lhdjv"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.460044 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.515215 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.515693 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.015673924 +0000 UTC m=+147.174875049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
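The probe entries interleaved with the volume retries follow the kubelet prober's contract: a dial error counts as a failure (the packageserver and controller-manager readiness probes above fail with "connection refused"), and so does a response outside the 2xx/3xx range (the router's startup probe returns statuscode 500 at 00:11:56 below). A small stand-alone approximation of that check, using the router's probe URL from this log and an assumed one-second timeout:

```go
// probe_sketch.go - illustrative only; approximates an HTTP probe check.
// A transport error and a non-2xx/3xx status are both reported as failures,
// matching the two failure modes visible in this log.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func probe(url string) (ok bool, detail string) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp [::1]:1936: connect: connection refused"
		return false, err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) // keep only the start of the body
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return false, fmt.Sprintf("HTTP probe failed with statuscode: %d, start-of-body: %s", resp.StatusCode, body)
	}
	return true, ""
}

func main() {
	ok, detail := probe("http://localhost:1936/healthz/ready")
	fmt.Println(ok, detail)
}
```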
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.526937 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.618074 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.618629 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.118602451 +0000 UTC m=+147.277803576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.645165 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" event={"ID":"2019e529-0498-4aa1-b3f9-65c63707d280","Type":"ContainerStarted","Data":"e8cf09da53c987f32f4c769068acaa2619c423b1fd93b33bae866f5d70fb9db4"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.649190 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" event={"ID":"25b4983a-dbb4-499e-9b78-ef637f425116","Type":"ContainerStarted","Data":"5b8e68c09e6a1083e248a495bdde0b52d5ce2949be44430a8bc0d7fb4ae7f48c"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.653390 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" event={"ID":"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b","Type":"ContainerStarted","Data":"b0039149f25f9fe3c5f31af73e9257b619bd3b3ff4d44ead5f5c9ed19fa8c4f8"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.654341 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" event={"ID":"91c2eb8f-4a83-425b-b2f3-2b034728d8f1","Type":"ContainerStarted","Data":"0e4554082c8fb46380c16f875e59de52922668eb59efcac5ffb19f08ece13994"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.658074 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" event={"ID":"67f7f3b0-5f2e-4242-be97-3e765a5ea9e0","Type":"ContainerStarted","Data":"678006e6a3ed3803fc602f69d0966bda48c45d7c6806b6c42a98df50141bb4af"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.660823 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" event={"ID":"d013c3f9-0e7e-4b67-9fd0-6f9e14c64287","Type":"ContainerStarted","Data":"d42eb3a14df0b6131ba33a9ed81c5b0ac86f204b58ddee0d1f0fceb379aaf29e"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.663335 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" event={"ID":"946bd7f5-92cd-435d-9ff8-72af506917be","Type":"ContainerStarted","Data":"6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.663805 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.687973 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-mh8jv"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.706623 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" event={"ID":"644c98f5-22e8-4e28-8d95-427acc12569c","Type":"ContainerStarted","Data":"7e9fa30af507d94de6e11eb2db5599d02ab90bb2d13903696fb5358eab6319a9"}
Jan 26 00:11:55 crc kubenswrapper[5121]: W0126 00:11:55.709329 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca0eaab_8776_4e08_811e_cb35fbe8f6a2.slice/crio-3ccbf113c4bd873ec396e0c121367f090fd7f0613e63c534c1d0aa34f5724dec WatchSource:0}: Error finding container 3ccbf113c4bd873ec396e0c121367f090fd7f0613e63c534c1d0aa34f5724dec: Status 404 returned error can't find the container with id 3ccbf113c4bd873ec396e0c121367f090fd7f0613e63c534c1d0aa34f5724dec
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.718815 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" event={"ID":"6e039297-dc55-4c6b-b76e-d2b83365ca3d","Type":"ContainerStarted","Data":"dbe824cc54eacd6c580a9c8e4d536cec0d4be8c4f49c1e480fd9c457a6b7848c"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.719464 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.719711 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.219679513 +0000 UTC m=+147.378880638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.721825 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.722573 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.222553309 +0000 UTC m=+147.381754434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.749198 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" event={"ID":"cfaf2a6d-872e-498c-bffd-089932c74e19","Type":"ContainerStarted","Data":"4f69b9c6f1156b8a2f7ff5b44d529442f965eed71112df0e8d727d35ee2424cf"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.749325 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.755039 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-66dzp" podStartSLOduration=119.755017266 podStartE2EDuration="1m59.755017266s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:55.70794545 +0000 UTC m=+146.867146585" watchObservedRunningTime="2026-01-26 00:11:55.755017266 +0000 UTC m=+146.914218391"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.758728 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-48wqr" event={"ID":"18a087c3-ca43-45fb-bacd-4689a2362ac0","Type":"ContainerStarted","Data":"ac1ef6c9fb45c0d0c5876b99e6cc2e6c11f61c492b8bb652f4ebfd68c384b00b"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.770147 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-926kg"]
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.796233 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" event={"ID":"e71a821d-2797-4bf9-96d3-d9a384e336e1","Type":"ContainerStarted","Data":"561b6ba1f4904dc5b274d5fa707ef95d6a2d02d2b439f568767f1bf8aa96a6de"}
Jan 26 00:11:55 crc kubenswrapper[5121]: W0126 00:11:55.797357 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c75b2fc_a93e_44bd_9070_7512402f3f71.slice/crio-26e3af89546a142e1d0ff614db98d012cdbd2f1bcd7dc317136a897852a1ff7e WatchSource:0}: Error finding container 26e3af89546a142e1d0ff614db98d012cdbd2f1bcd7dc317136a897852a1ff7e: Status 404 returned error can't find the container with id 26e3af89546a142e1d0ff614db98d012cdbd2f1bcd7dc317136a897852a1ff7e
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.798629 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" event={"ID":"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee","Type":"ContainerStarted","Data":"edb7cb0a6e4c5b80d9103e2acf6b71b19379ca0eb19fdbe67ce9697304b1a1ae"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.798779 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-94msz" podStartSLOduration=119.798738262 podStartE2EDuration="1m59.798738262s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:55.771397939 +0000 UTC m=+146.930599084" watchObservedRunningTime="2026-01-26 00:11:55.798738262 +0000 UTC m=+146.957939397"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.798898 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" podStartSLOduration=11.798891857 podStartE2EDuration="11.798891857s" podCreationTimestamp="2026-01-26 00:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:55.796256507 +0000 UTC m=+146.955457632" watchObservedRunningTime="2026-01-26 00:11:55.798891857 +0000 UTC m=+146.958092982"
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.818840 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" event={"ID":"e93d8f25-2b5b-4f00-a6a7-bc1ee0690800","Type":"ContainerStarted","Data":"c372c343310b5b5587eb4c5734f02ce2106039b51e07f3caf113f15e517856ae"}
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.823561 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.825037 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.325013883 +0000 UTC m=+147.484215008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:55 crc kubenswrapper[5121]: I0126 00:11:55.925798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:55 crc kubenswrapper[5121]: E0126 00:11:55.926488 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.426458856 +0000 UTC m=+147.585659981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.028261 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.028487 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.528429894 +0000 UTC m=+147.687631039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.028891 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.029548 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.529529747 +0000 UTC m=+147.688730942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.085251 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p5bxm"]
Jan 26 00:11:56 crc kubenswrapper[5121]: W0126 00:11:56.092281 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf3bac84_ca0c_4b27_a190_a808916babea.slice/crio-b202b0ffa086fe7b5caaf2ccb32e1f6eea5d475a97e466d097c98b661d3e43b1 WatchSource:0}: Error finding container b202b0ffa086fe7b5caaf2ccb32e1f6eea5d475a97e466d097c98b661d3e43b1: Status 404 returned error can't find the container with id b202b0ffa086fe7b5caaf2ccb32e1f6eea5d475a97e466d097c98b661d3e43b1
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.130461 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.130925 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.630864247 +0000 UTC m=+147.790065382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.215109 5121 ???:1] "http: TLS handshake error from 192.168.126.11:41144: no serving certificate available for the kubelet"
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.233486 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.234188 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.734164045 +0000 UTC m=+147.893365170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.237344 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hs67g"]
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.248060 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:56 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Jan 26 00:11:56 crc kubenswrapper[5121]: [+]process-running ok
Jan 26 00:11:56 crc kubenswrapper[5121]: healthz check failed
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.248126 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.347560 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.347953 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.847935219 +0000 UTC m=+148.007136344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: W0126 00:11:56.461406 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6aeb8de7_b6c5_4617_8139_93af186b1adc.slice/crio-3bc5b1ff73c02a1c8c6540abec1a3fbfd965dcff1b4501c3284c3a5987b44048 WatchSource:0}: Error finding container 3bc5b1ff73c02a1c8c6540abec1a3fbfd965dcff1b4501c3284c3a5987b44048: Status 404 returned error can't find the container with id 3bc5b1ff73c02a1c8c6540abec1a3fbfd965dcff1b4501c3284c3a5987b44048
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.462444 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.462823 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:56.962808166 +0000 UTC m=+148.122009291 (durationBeforeRetry 500ms).
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.478386 5121 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-ljq2k container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.478498 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" podUID="9da119f5-ef9e-41d0-adef-a5e261563611" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.482602 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.536057 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.536173 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-g5dxr"
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.538035 5121 patch_prober.go:28] interesting pod/console-64d44f6ddf-g5dxr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.538160 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-g5dxr" podUID="85c879f7-5fe1-44b3-94ca-dd368a14be73" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.564004 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.564328 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.06428971 +0000 UTC m=+148.223490835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.565102 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.565484 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.065465035 +0000 UTC m=+148.224666160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.666672 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.666923 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.166881587 +0000 UTC m=+148.326082722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.717056 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhklg"]
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.768788 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.769121 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.269107584 +0000 UTC m=+148.428308709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.825441 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerStarted","Data":"26e3af89546a142e1d0ff614db98d012cdbd2f1bcd7dc317136a897852a1ff7e"}
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.827435 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" event={"ID":"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2","Type":"ContainerStarted","Data":"3ccbf113c4bd873ec396e0c121367f090fd7f0613e63c534c1d0aa34f5724dec"}
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.829467 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6aeb8de7-b6c5-4617-8139-93af186b1adc","Type":"ContainerStarted","Data":"3bc5b1ff73c02a1c8c6540abec1a3fbfd965dcff1b4501c3284c3a5987b44048"}
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.830563 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-mh8jv" event={"ID":"420ea536-e22c-4ded-972a-3fe1ad5bc1ce","Type":"ContainerStarted","Data":"5710bbb26daffa7d7626eef97fa550e12e7446a3d09980ed509c2fd38f775b38"}
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.832518 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p5bxm" event={"ID":"df3bac84-ca0c-4b27-a190-a808916babea","Type":"ContainerStarted","Data":"b202b0ffa086fe7b5caaf2ccb32e1f6eea5d475a97e466d097c98b661d3e43b1"}
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.833733 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hs67g" event={"ID":"197eb808-9411-4b4c-b882-85f9c3479dae","Type":"ContainerStarted","Data":"d6c06769eadc2b9fa350e3b707807f0023d89fabc65af83cce45725a84ca4c32"}
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.834773 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" event={"ID":"a5d73c73-c20a-43a6-b318-bfe8557d4dbb","Type":"ContainerStarted","Data":"2877a992a764adb4c3810f136348244e987e287585bbc263dcd1b134e21a6c17"}
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.934504 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.934838 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.43479016 +0000 UTC m=+148.593991285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:56 crc kubenswrapper[5121]: I0126 00:11:56.935639 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:56 crc kubenswrapper[5121]: E0126 00:11:56.936287 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.436267384 +0000 UTC m=+148.595468509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.088674 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.088939 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.588895797 +0000 UTC m=+148.748096922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.089728 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.090263 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.590241488 +0000 UTC m=+148.749442623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.191467 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.191667 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.691643939 +0000 UTC m=+148.850845064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.191965 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.192457 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.692434473 +0000 UTC m=+148.851635608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.238348 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:57 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Jan 26 00:11:57 crc kubenswrapper[5121]: [+]process-running ok
Jan 26 00:11:57 crc kubenswrapper[5121]: healthz check failed
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.238442 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.293367 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.293620 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.793580387 +0000 UTC m=+148.952781522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.293899 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.294339 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.794317649 +0000 UTC m=+148.953518784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.397676 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.398109 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:57.898089152 +0000 UTC m=+149.057290287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.419333 5121 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-ljq2k container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.419403 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" podUID="9da119f5-ef9e-41d0-adef-a5e261563611" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.429583 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-rv4fb" podStartSLOduration=121.429562009 podStartE2EDuration="2m1.429562009s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:57.428806176 +0000 UTC m=+148.588007331" watchObservedRunningTime="2026-01-26 00:11:57.429562009 +0000 UTC m=+148.588763134"
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.476976 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.477082 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.494235 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" podStartSLOduration=121.494214925 podStartE2EDuration="2m1.494214925s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:57.492372369 +0000 UTC m=+148.651573514" watchObservedRunningTime="2026-01-26 00:11:57.494214925 +0000 UTC m=+148.653416050" Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.506472 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.509350 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.009331529 +0000 UTC m=+149.168532774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.623467 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.624439 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.124409973 +0000 UTC m=+149.283611098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.813164 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.813714 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.313699049 +0000 UTC m=+149.472900174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.911061 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-48wqr" event={"ID":"18a087c3-ca43-45fb-bacd-4689a2362ac0","Type":"ContainerStarted","Data":"d5a74e342d905b6e0e811949b15fd7fe9b5f2fb3cb3839e01c531a2caf3e958f"} Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.914017 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:57 crc kubenswrapper[5121]: E0126 00:11:57.914487 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.414465351 +0000 UTC m=+149.573666476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.939957 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" event={"ID":"2019e529-0498-4aa1-b3f9-65c63707d280","Type":"ContainerStarted","Data":"67846d259f549b83071e98a73f11e3b9b24db8a8aa7c012448b0785ab5c5807c"} Jan 26 00:11:57 crc kubenswrapper[5121]: I0126 00:11:57.940006 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.016440 5121 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-ldf8d container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.016535 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" podUID="2019e529-0498-4aa1-b3f9-65c63707d280" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.021044 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.023912 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.523879434 +0000 UTC m=+149.683080729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.025450 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" event={"ID":"a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b","Type":"ContainerStarted","Data":"a3cff0257f0839bedbc8cccdaa019709833e5d48558c13c7de70f83c76a45249"} Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.026659 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.047983 5121 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-r5x7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.048053 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" podUID="a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.051749 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8flxd" podStartSLOduration=121.051730912 podStartE2EDuration="2m1.051730912s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:57.530865378 +0000 UTC m=+148.690066503" watchObservedRunningTime="2026-01-26 00:11:58.051730912 +0000 UTC m=+149.210932037" Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.080599 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" event={"ID":"91c2eb8f-4a83-425b-b2f3-2b034728d8f1","Type":"ContainerStarted","Data":"3d1d00f82b08a974d394071ee958de48c01851fef58a47eeb07482f8bf9f759c"} Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.091690 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" event={"ID":"25b4983a-dbb4-499e-9b78-ef637f425116","Type":"ContainerStarted","Data":"9c22aaec94a0c4e27ad81bf58c7cdbb9c291e66d1b7b36936313f30fa78cf4bd"} Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.092153 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" gracePeriod=30 Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.101434 5121 
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.103463 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" podStartSLOduration=121.103449879 podStartE2EDuration="2m1.103449879s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:58.083710335 +0000 UTC m=+149.242911480" watchObservedRunningTime="2026-01-26 00:11:58.103449879 +0000 UTC m=+149.262651004"
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.125422 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" podStartSLOduration=121.125402269 podStartE2EDuration="2m1.125402269s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:58.111912973 +0000 UTC m=+149.271114118" watchObservedRunningTime="2026-01-26 00:11:58.125402269 +0000 UTC m=+149.284603394"
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.129068 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.131416 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.631389669 +0000 UTC m=+149.790590794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.134687 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.137238 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.637219555 +0000 UTC m=+149.796420670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.200471 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" podStartSLOduration=122.200444637 podStartE2EDuration="2m2.200444637s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:58.169837656 +0000 UTC m=+149.329038791" watchObservedRunningTime="2026-01-26 00:11:58.200444637 +0000 UTC m=+149.359645782"
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.238435 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.238905 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.738881644 +0000 UTC m=+149.898082769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.253357 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:58 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Jan 26 00:11:58 crc kubenswrapper[5121]: [+]process-running ok
Jan 26 00:11:58 crc kubenswrapper[5121]: healthz check failed
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.253431 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.341016 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.341381 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.841364488 +0000 UTC m=+150.000565613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.346910 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ztcvg"
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.379028 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w75l2" podStartSLOduration=122.379010101 podStartE2EDuration="2m2.379010101s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:58.199720106 +0000 UTC m=+149.358921231" watchObservedRunningTime="2026-01-26 00:11:58.379010101 +0000 UTC m=+149.538211226"
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.444461 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.445150 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.945123111 +0000 UTC m=+150.104324236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.445251 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.446513 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:58.946501302 +0000 UTC m=+150.105702517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.547661 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.548307 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.048284945 +0000 UTC m=+150.207486070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.697111 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.697463 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.197451053 +0000 UTC m=+150.356652178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.822620 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.823010 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.322989911 +0000 UTC m=+150.482191036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.984647 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:58 crc kubenswrapper[5121]: E0126 00:11:58.985166 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.485154521 +0000 UTC m=+150.644355646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:58 crc kubenswrapper[5121]: I0126 00:11:58.991960 5121 ???:1] "http: TLS handshake error from 192.168.126.11:47222: no serving certificate available for the kubelet"
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.086645 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.087088 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.587064878 +0000 UTC m=+150.746266003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.090868 5121 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-ljq2k container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.090938 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" podUID="9da119f5-ef9e-41d0-adef-a5e261563611" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.138471 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" event={"ID":"a5d73c73-c20a-43a6-b318-bfe8557d4dbb","Type":"ContainerStarted","Data":"092484408c8b7b72b6333e2df4fbbfd0b69210dc6910abfb3626dfedf6e6c136"}
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.149340 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerStarted","Data":"ced3b461a50436368935d9b9ef9c293d0eb80a3a47e55938bf8d8741f81d8d7c"}
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.164679 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" event={"ID":"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2","Type":"ContainerStarted","Data":"b086e95557b220f0248472d7db0daf1eeed2c3c704c98cc53a363b7025203dad"}
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.174445 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" event={"ID":"25b4983a-dbb4-499e-9b78-ef637f425116","Type":"ContainerStarted","Data":"21d0ad09dcf0c77ca76bd1adb58534b6236280bb3e632fabf627268d71d2618c"}
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.192260 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-mh8jv" event={"ID":"420ea536-e22c-4ded-972a-3fe1ad5bc1ce","Type":"ContainerStarted","Data":"badc705289f4663f905b970be2e05b457b7429c240be2e68160ee0e6e6543eb8"}
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.192374 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.193049 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.693030567 +0000 UTC m=+150.852231692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.199116 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" event={"ID":"e71a821d-2797-4bf9-96d3-d9a384e336e1","Type":"ContainerStarted","Data":"9b2c696d14a5830502884122b3c2934f03b41afa42f04605833ce858ba951955"}
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.201753 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" event={"ID":"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee","Type":"ContainerStarted","Data":"70e09778d157ab55ccc8488b5c0d6966b0da3cdd9561d44de41d9648b931108b"}
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.261071 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:11:59 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Jan 26 00:11:59 crc kubenswrapper[5121]: [+]process-running ok
Jan 26 00:11:59 crc kubenswrapper[5121]: healthz check failed
Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.261168 5121 prober.go:120] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.294003 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.294209 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.79416619 +0000 UTC m=+150.953367315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.294704 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.295158 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.795132279 +0000 UTC m=+150.954333424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.396060 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.396365 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.896344825 +0000 UTC m=+151.055545950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.498287 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.498602 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:11:59.998590052 +0000 UTC m=+151.157791177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.549110 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.549185 5121 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-r5x7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.549216 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" podUID="a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.575189 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-926kg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.575281 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.586331 5121 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podStartSLOduration=123.586310062 podStartE2EDuration="2m3.586310062s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:59.586235259 +0000 UTC m=+150.745436394" watchObservedRunningTime="2026-01-26 00:11:59.586310062 +0000 UTC m=+150.745511187" Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.600585 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.600871 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.100835189 +0000 UTC m=+151.260036314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.601346 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.601716 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.101696565 +0000 UTC m=+151.260897750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.703222 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.703338 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.203316003 +0000 UTC m=+151.362517128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.703608 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.704045 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.204027644 +0000 UTC m=+151.363228769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.804853 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.804961 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.304942241 +0000 UTC m=+151.464143366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.805346 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.805700 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.305688874 +0000 UTC m=+151.464889999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:11:59 crc kubenswrapper[5121]: I0126 00:11:59.908189 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:11:59 crc kubenswrapper[5121]: E0126 00:11:59.908657 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.408632372 +0000 UTC m=+151.567833497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.010309 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.010654 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.510638851 +0000 UTC m=+151.669839976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.111596 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.111784 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.611740554 +0000 UTC m=+151.770941679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.112387 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.112731 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.612719563 +0000 UTC m=+151.771920688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.213446 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.213674 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.71364522 +0000 UTC m=+151.872846345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.214001 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.214383 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.714366792 +0000 UTC m=+151.873567927 (durationBeforeRetry 500ms). 
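Each failed volume operation in this loop is rescheduled half a second later, matching the fixed "durationBeforeRetry 500ms" printed in every nestedpendingoperations record. A small check with a timestamp pair copied from one cycle above (the deadline is computed just before the error record is written, so the delta lands a few microseconds under 500ms):

```python
# Sketch: the retry cadence of the volume-operation failures above.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
failed   = datetime.strptime("2026-01-26 00:11:58.548307", FMT)  # E0126 record time
retry_at = datetime.strptime("2026-01-26 00:11:59.048284", FMT)  # "No retries until", ns truncated

print(retry_at - failed)  # 0:00:00.499977 -> the advertised 500ms backoff
```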
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.225910 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p5bxm" event={"ID":"df3bac84-ca0c-4b27-a190-a808916babea","Type":"ContainerStarted","Data":"ef87cebe23022b538688807ae8d9c668e7946dda23cec5a48e9b37a93c829526"} Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.293277 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:00 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:00 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:00 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.293696 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.339469 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.839438566 +0000 UTC m=+151.998639691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.339508 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.340056 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.340491 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:12:00.840476557 +0000 UTC m=+151.999677682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.441534 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.442239 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:00.942190508 +0000 UTC m=+152.101391633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.544154 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.544618 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.04459809 +0000 UTC m=+152.203799215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.600903 5121 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-r5x7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.600976 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" podUID="a2e0ce4f-8f7e-42be-b9fd-e8e63bbfe74b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.601707 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-926kg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.601829 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.646655 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.647705 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.147678542 +0000 UTC m=+152.306879667 (durationBeforeRetry 500ms). 
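The probe failures interleaved through this log are plain HTTP GETs with a short timeout: the catalog-operator and marketplace-operator readiness probes fail with "connection refused" because nothing is listening yet, and the router's startup probe fails on an HTTP 500 from its healthz endpoint. A stdlib-only sketch of roughly what the kubelet's HTTP prober does, using the marketplace-operator URL from the records above — illustrative, not the kubelet's actual code path (which also skips TLS verification for https probes):

```python
# Sketch: an HTTP probe like the failing Readiness/Startup probes above.
import urllib.request

def probe(url: str, timeout: float = 1.0) -> str:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"success: HTTP {resp.status}"   # 2xx/3xx counts as healthy
    except OSError as e:
        # URLError subclasses OSError; "reason" carries errors like
        # "Connection refused" or "Internal Server Error", as seen in the log.
        return f"failure: {getattr(e, 'reason', e)}"

print(probe("http://10.217.0.34:8080/healthz"))
```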
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.750640 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.751182 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.251163296 +0000 UTC m=+152.410364431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.856302 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.856772 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.356739203 +0000 UTC m=+152.515940328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.891942 5121 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-ldf8d container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.892019 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" podUID="2019e529-0498-4aa1-b3f9-65c63707d280" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.932422 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-mh8jv" podStartSLOduration=123.93240138 podStartE2EDuration="2m3.93240138s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:00.931095421 +0000 UTC m=+152.090296556" watchObservedRunningTime="2026-01-26 00:12:00.93240138 +0000 UTC m=+152.091602505" Jan 26 00:12:00 crc kubenswrapper[5121]: I0126 00:12:00.958418 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:00 crc kubenswrapper[5121]: E0126 00:12:00.958972 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.458948329 +0000 UTC m=+152.618149454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.060489 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.060839 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.560803504 +0000 UTC m=+152.720004639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.060994 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.061538 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.561514616 +0000 UTC m=+152.720715751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.094504 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-ljq2k" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.174965 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.175180 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.675150035 +0000 UTC m=+152.834351160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.175592 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.176067 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.676051012 +0000 UTC m=+152.835252147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.209885 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-5r8lz" podStartSLOduration=125.20986355 podStartE2EDuration="2m5.20986355s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:00.992260652 +0000 UTC m=+152.151461777" watchObservedRunningTime="2026-01-26 00:12:01.20986355 +0000 UTC m=+152.369064675" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.234107 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.240822 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:01 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:01 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:01 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.240886 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.262114 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" event={"ID":"1ca0eaab-8776-4e08-811e-cb35fbe8f6a2","Type":"ContainerStarted","Data":"93055cd0f1e347057aed4b733ee1dd768bf92d4ae2fd1adfdf049ccf6d1c35db"} Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.276504 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.278075 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.778052602 +0000 UTC m=+152.937253727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.286265 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6aeb8de7-b6c5-4617-8139-93af186b1adc","Type":"ContainerStarted","Data":"cbb8352550f50808c91559726dafbdc945a5b54cf5ae290737a1bd95bb76d9ce"} Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.287306 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.290864 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-926kg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.290928 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.351061 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-l7fcz" podStartSLOduration=125.351037448 podStartE2EDuration="2m5.351037448s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:01.348631506 +0000 UTC m=+152.507832631" watchObservedRunningTime="2026-01-26 00:12:01.351037448 +0000 UTC m=+152.510238573" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.399861 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=6.3998415269999995 podStartE2EDuration="6.399841527s" podCreationTimestamp="2026-01-26 00:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:01.398770635 +0000 UTC m=+152.557971770" watchObservedRunningTime="2026-01-26 00:12:01.399841527 +0000 UTC m=+152.559042652" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.400092 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.401479 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:01.901460536 +0000 UTC m=+153.060661741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.503454 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" podStartSLOduration=124.503428285 podStartE2EDuration="2m4.503428285s" podCreationTimestamp="2026-01-26 00:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:01.500024972 +0000 UTC m=+152.659226097" watchObservedRunningTime="2026-01-26 00:12:01.503428285 +0000 UTC m=+152.662629410" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.505555 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.506776 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.006739044 +0000 UTC m=+153.165940169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.607868 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.608297 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.10827466 +0000 UTC m=+153.267475785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.712488 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.712701 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.212646151 +0000 UTC m=+153.371847276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.713190 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.713706 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.213688002 +0000 UTC m=+153.372889127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.814883 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.815445 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.315416084 +0000 UTC m=+153.474617209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.899291 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.899724 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.919075 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:01 crc kubenswrapper[5121]: E0126 00:12:01.919476 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.419457474 +0000 UTC m=+153.578658599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.927686 5121 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-prnb4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]log ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]etcd ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/max-in-flight-filter ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 00:12:01 crc kubenswrapper[5121]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 00:12:01 crc kubenswrapper[5121]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/openshift.io-startinformers ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 00:12:01 crc kubenswrapper[5121]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 00:12:01 crc kubenswrapper[5121]: livez check failed Jan 26 00:12:01 crc kubenswrapper[5121]: I0126 00:12:01.927805 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4" podUID="6e039297-dc55-4c6b-b76e-d2b83365ca3d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.043323 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.044974 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.544938561 +0000 UTC m=+153.704139696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.144956 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.145470 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.645448415 +0000 UTC m=+153.804649540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.239738 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:02 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:02 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:02 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.239820 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.245639 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.245863 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.745842305 +0000 UTC m=+153.905043430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.246004 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.246301 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.746294159 +0000 UTC m=+153.905495284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.307817 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" event={"ID":"eff4a82d-18f4-4f97-8b86-0eb0ffdf20ee","Type":"ContainerStarted","Data":"e8808d4b9b663fe0a8a826ceb27241299ca4c9a3d93afcba8b4eb1edf777aa31"} Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.311525 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p5bxm" event={"ID":"df3bac84-ca0c-4b27-a190-a808916babea","Type":"ContainerStarted","Data":"89a5340fbff1ad1c977f1121e3ed37365ed6c12253ad25edcb61e7b7ea1dd92c"} Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.314035 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" event={"ID":"a5d73c73-c20a-43a6-b318-bfe8557d4dbb","Type":"ContainerStarted","Data":"d8f8b52057b552f656c225682c4a7114ad4793080808b1d8f0af7098cccc192c"} Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.349240 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.350377 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.85035272 +0000 UTC m=+154.009553845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.438792 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.438851 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.451036 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.451590 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:02.951571257 +0000 UTC m=+154.110772382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.483780 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-lhdjv" podStartSLOduration=126.483740365 podStartE2EDuration="2m6.483740365s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:02.456929208 +0000 UTC m=+153.616130343" watchObservedRunningTime="2026-01-26 00:12:02.483740365 +0000 UTC m=+153.642941490" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.553500 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.553819 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.053798403 +0000 UTC m=+154.212999528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.554253 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.554786 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.054778142 +0000 UTC m=+154.213979267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.655827 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.656848 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.156818233 +0000 UTC m=+154.316019358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.757606 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.757658 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.757739 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.758086 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.25807298 +0000 UTC m=+154.417274105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.766411 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.797702 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nsl8g" podStartSLOduration=126.797677602 podStartE2EDuration="2m6.797677602s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:02.489112266 +0000 UTC m=+153.648313411" watchObservedRunningTime="2026-01-26 00:12:02.797677602 +0000 UTC m=+153.956878727" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.798911 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4p4cc"] Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.859137 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.859589 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.859747 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.860822 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.360782651 +0000 UTC m=+154.519983806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.865097 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.871687 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.881652 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.889615 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.918779 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:02 crc kubenswrapper[5121]: I0126 00:12:02.961725 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:02 crc kubenswrapper[5121]: E0126 00:12:02.962274 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.462254045 +0000 UTC m=+154.621455170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.062880 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.063080 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.563032197 +0000 UTC m=+154.722233322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.063267 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.063642 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.563628425 +0000 UTC m=+154.722829550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.169843 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.170042 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.670008187 +0000 UTC m=+154.829209312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.196652 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.236997 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:03 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:03 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:03 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.237489 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.252160 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.252216 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.271374 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.271658 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.771645555 +0000 UTC m=+154.930846680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.372878 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.374063 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.874024036 +0000 UTC m=+155.033225161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.474656 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.475040 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:03.975020586 +0000 UTC m=+155.134221721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.578209 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.578536 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.07851767 +0000 UTC m=+155.237718795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.679318 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.679842 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.179738836 +0000 UTC m=+155.338939961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.780471 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.781061 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.281018154 +0000 UTC m=+155.440219279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.884493 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.884957 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.384940761 +0000 UTC m=+155.544141886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:03 crc kubenswrapper[5121]: I0126 00:12:03.990535 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:03 crc kubenswrapper[5121]: E0126 00:12:03.991412 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.491380215 +0000 UTC m=+155.650581340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.088899 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-926kg container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.088963 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.092966 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.093471 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.593453336 +0000 UTC m=+155.752654461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.155131 5121 ???:1] "http: TLS handshake error from 192.168.126.11:47228: no serving certificate available for the kubelet" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.194371 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.194560 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.694507177 +0000 UTC m=+155.853708302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.194971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.195440 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.695429555 +0000 UTC m=+155.854630690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.237467 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:04 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:04 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:04 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.237550 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.296160 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.296566 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.296831 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.796801316 +0000 UTC m=+155.956002451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.298407 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4p4cc"]
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.298536 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-nhmff"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.298587 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.298599 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v8cdp"]
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.299137 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.301229 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.309219 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2c23c20-cf98-42ae-b5fb-5bbde2b0740c-metrics-certs\") pod \"network-metrics-daemon-2st6h\" (UID: \"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c\") " pod="openshift-multus/network-metrics-daemon-2st6h"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.346123 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v8cdp"]
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.346162 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dfhxk"]
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.347254 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v8cdp"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.358894 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfhxk"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.364239 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.373132 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.373183 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfhxk"]
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.373205 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dmjdc"]
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.398650 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.398731 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-utilities\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.398811 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4nk\" (UniqueName: \"kubernetes.io/projected/395eb036-2c83-4393-b3a7-d6b872cf9e4b-kube-api-access-ll4nk\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.399067 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-catalog-content\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.399828 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:04.899808466 +0000 UTC m=+156.059009591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.403069 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-579cz"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.403128 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmjdc"]
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.403274 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dmjdc"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.405695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"0f1ca077531496fc59974ed1bdf9104e9c5ea66e36544601cb83815b9379ad06"}
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.407250 5121 generic.go:358] "Generic (PLEG): container finished" podID="6aeb8de7-b6c5-4617-8139-93af186b1adc" containerID="cbb8352550f50808c91559726dafbdc945a5b54cf5ae290737a1bd95bb76d9ce" exitCode=0
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.407341 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6aeb8de7-b6c5-4617-8139-93af186b1adc","Type":"ContainerDied","Data":"cbb8352550f50808c91559726dafbdc945a5b54cf5ae290737a1bd95bb76d9ce"}
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.412732 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"b4a7b659f8d58e1beee24455b98814be8269303deeb2ca0d86e258274c490c17"}
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.417835 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hs67g" event={"ID":"197eb808-9411-4b4c-b882-85f9c3479dae","Type":"ContainerStarted","Data":"10e29cd0c78193fb9f757bf993f8e8274d370bde0bbabf49639f987d5945d346"}
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.420065 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"ef89a1828068af2901b818472071a5f0757474009aaeadbe66f5955aab6caf4d"}
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.420100 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-p5bxm"
Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.489166 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h"
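Note: every MountVolume.MountDevice and UnmountVolume.TearDown failure above has the same root cause: the kubelet does not yet list kubevirt.io.hostpath-provisioner among its registered CSI drivers, while the PLEG events show the csi-hostpathplugin-hs67g pod only just starting its containers. This is the usual startup race; the volume operations back off (durationBeforeRetry 500ms) until the driver registers itself over the kubelet's plugin-registration socket. A minimal way to confirm registration by hand, assuming oc access to this cluster and the default kubelet paths (both assumptions, not shown in the log):

    # Does the CSIDriver object exist in the API?
    $ oc get csidriver kubevirt.io.hostpath-provisioner

    # Which drivers have completed node registration on this node ("crc")?
    $ oc get csinode crc -o jsonpath='{.spec.drivers[*].name}'

    # On the node itself: registration sockets the kubelet watches
    $ ls /var/lib/kubelet/plugins_registry/

Once the driver's registration socket appears, the pending operations for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 should succeed on a subsequent retry.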
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2st6h" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.500849 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501067 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-catalog-content\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501158 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-utilities\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501187 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-utilities\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501206 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tff7q\" (UniqueName: \"kubernetes.io/projected/c51b5df5-ef7d-4d88-b10c-1321140728e8-kube-api-access-tff7q\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501228 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4nk\" (UniqueName: \"kubernetes.io/projected/395eb036-2c83-4393-b3a7-d6b872cf9e4b-kube-api-access-ll4nk\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501256 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rkv\" (UniqueName: \"kubernetes.io/projected/77411de1-0221-4222-b0f1-33d1beba40ad-kube-api-access-88rkv\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501275 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-utilities\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501304 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-catalog-content\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501324 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-utilities\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501355 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-catalog-content\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501370 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mfs\" (UniqueName: \"kubernetes.io/projected/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-kube-api-access-52mfs\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.501398 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-catalog-content\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc" Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.502126 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.002107694 +0000 UTC m=+156.161308819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.503325 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-utilities\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.504269 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-catalog-content\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.543341 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4nk\" (UniqueName: \"kubernetes.io/projected/395eb036-2c83-4393-b3a7-d6b872cf9e4b-kube-api-access-ll4nk\") pod \"community-operators-4p4cc\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") " pod="openshift-marketplace/community-operators-4p4cc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.544093 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-p5bxm" podStartSLOduration=20.544076997 podStartE2EDuration="20.544076997s" podCreationTimestamp="2026-01-26 00:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:04.54249732 +0000 UTC m=+155.701698445" watchObservedRunningTime="2026-01-26 00:12:04.544076997 +0000 UTC m=+155.703278142" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605126 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-catalog-content\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605175 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52mfs\" (UniqueName: \"kubernetes.io/projected/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-kube-api-access-52mfs\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605264 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-catalog-content\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605303 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605348 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-utilities\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605365 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tff7q\" (UniqueName: \"kubernetes.io/projected/c51b5df5-ef7d-4d88-b10c-1321140728e8-kube-api-access-tff7q\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605397 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-88rkv\" (UniqueName: \"kubernetes.io/projected/77411de1-0221-4222-b0f1-33d1beba40ad-kube-api-access-88rkv\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605422 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-utilities\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605467 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-catalog-content\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.605506 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-utilities\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.606007 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-utilities\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.606294 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-catalog-content\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.607921 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-catalog-content\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.608193 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.108175096 +0000 UTC m=+156.267376221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.608841 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-utilities\") pod \"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.609498 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-utilities\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.609728 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-catalog-content\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.640503 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.653429 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tff7q\" (UniqueName: \"kubernetes.io/projected/c51b5df5-ef7d-4d88-b10c-1321140728e8-kube-api-access-tff7q\") pod \"certified-operators-dfhxk\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") " pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.657678 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rkv\" (UniqueName: \"kubernetes.io/projected/77411de1-0221-4222-b0f1-33d1beba40ad-kube-api-access-88rkv\") pod \"community-operators-v8cdp\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.664646 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52mfs\" (UniqueName: \"kubernetes.io/projected/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-kube-api-access-52mfs\") pod 
\"certified-operators-dmjdc\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.674480 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4p4cc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.682121 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.708330 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.708639 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.208620919 +0000 UTC m=+156.367822044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.720117 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.732019 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.815130 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.817781 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.317728542 +0000 UTC m=+156.476929667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:04 crc kubenswrapper[5121]: I0126 00:12:04.916873 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:04 crc kubenswrapper[5121]: E0126 00:12:04.917178 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.417157294 +0000 UTC m=+156.576358419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.018094 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.018486 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.518473663 +0000 UTC m=+156.677674788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.119354 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.119592 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.619574456 +0000 UTC m=+156.778775581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.229511 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.229831 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rfv"] Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.230120 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.730105262 +0000 UTC m=+156.889306387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.257377 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rfv" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.270951 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.281204 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:05 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:05 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:05 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.281280 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.305376 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rfv"] Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.334344 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.334748 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.83472788 +0000 UTC m=+156.993929005 (durationBeforeRetry 500ms). 
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.440811 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-catalog-content\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.440899 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-utilities\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.441022 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhv9\" (UniqueName: \"kubernetes.io/projected/3225226b-6f86-4163-b401-b9136c86dfed-kube-api-access-6rhv9\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.441062 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.441517 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:05.941496933 +0000 UTC m=+157.100698058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.446325 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"ec87b6fbeeacb0c311038ff9e33bcfc991c7b07026354f7dffb6f2d1afe2acd2"}
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.466055 5121 generic.go:358] "Generic (PLEG): container finished" podID="3bd78e9f-18ce-4592-866f-029d883e2d95" containerID="30f9a43dfff1437b43218b7e3c78646e4851f431af51ea47dc53399e502e4905" exitCode=0
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.466125 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" event={"ID":"3bd78e9f-18ce-4592-866f-029d883e2d95","Type":"ContainerDied","Data":"30f9a43dfff1437b43218b7e3c78646e4851f431af51ea47dc53399e502e4905"}
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.480796 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"85ae3ed2a7920b09f78b7bde03d402e0176721214319d132f39d89834e68c064"}
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.511696 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"7472a4538cc30e33033427a5a2086bac8387b75099ba5b163730f482d3552e55"}
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.512008 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.542217 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.542544 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6rhv9\" (UniqueName: \"kubernetes.io/projected/3225226b-6f86-4163-b401-b9136c86dfed-kube-api-access-6rhv9\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.542606 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-catalog-content\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.542660 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-utilities\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.544584 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-catalog-content\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.544729 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.044684829 +0000 UTC m=+157.203885954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.545044 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-utilities\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.556306 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2st6h"]
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.605136 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhv9\" (UniqueName: \"kubernetes.io/projected/3225226b-6f86-4163-b401-b9136c86dfed-kube-api-access-6rhv9\") pod \"redhat-marketplace-m7rfv\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") " pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.621118 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zdxmp"]
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.634910 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdxmp"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.635520 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdxmp"]
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.652281 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.656533 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.156516674 +0000 UTC m=+157.315717799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.678176 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.702604 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.713544 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.715149 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.715208 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.754696 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.754879 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-utilities\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.754913 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kv6v\" (UniqueName: \"kubernetes.io/projected/42a04527-f4f6-4570-8b32-08c2e4515c41-kube-api-access-8kv6v\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.754946 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-catalog-content\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.755106 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.255083159 +0000 UTC m=+157.414284284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.856677 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-utilities\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.856726 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8kv6v\" (UniqueName: \"kubernetes.io/projected/42a04527-f4f6-4570-8b32-08c2e4515c41-kube-api-access-8kv6v\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.856755 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-catalog-content\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.856820 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.857251 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.357230293 +0000 UTC m=+157.516431418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.857408 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-catalog-content\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.857457 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-utilities\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.913303 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v8cdp"] Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.916789 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kv6v\" (UniqueName: \"kubernetes.io/projected/42a04527-f4f6-4570-8b32-08c2e4515c41-kube-api-access-8kv6v\") pod \"redhat-marketplace-zdxmp\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:05 crc kubenswrapper[5121]: I0126 00:12:05.958359 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:05 crc kubenswrapper[5121]: E0126 00:12:05.958674 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.458651545 +0000 UTC m=+157.617852670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:05 crc kubenswrapper[5121]: W0126 00:12:05.961362 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77411de1_0221_4222_b0f1_33d1beba40ad.slice/crio-1a9b8862bf8f59c9acac8ddbdfe77fc5415956c86e7e08c47963771791fe58e5 WatchSource:0}: Error finding container 1a9b8862bf8f59c9acac8ddbdfe77fc5415956c86e7e08c47963771791fe58e5: Status 404 returned error can't find the container with id 1a9b8862bf8f59c9acac8ddbdfe77fc5415956c86e7e08c47963771791fe58e5 Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.000200 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.063706 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.064095 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.564078488 +0000 UTC m=+157.723279613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.165244 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.165658 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.665633544 +0000 UTC m=+157.824834669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.224741 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hrfn9"]
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.255913 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:12:06 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Jan 26 00:12:06 crc kubenswrapper[5121]: [+]process-running ok
Jan 26 00:12:06 crc kubenswrapper[5121]: healthz check failed
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.255976 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.268975 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.269334 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.769312664 +0000 UTC m=+157.928513789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.372583 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.373367 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.873344535 +0000 UTC m=+158.032545670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: W0126 00:12:06.384699 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod395eb036_2c83_4393_b3a7_d6b872cf9e4b.slice/crio-4a2247ecaaf87664a9bd1e8dcd913fc60f73cb13bd5fe86e48d1de509eb52d5b WatchSource:0}: Error finding container 4a2247ecaaf87664a9bd1e8dcd913fc60f73cb13bd5fe86e48d1de509eb52d5b: Status 404 returned error can't find the container with id 4a2247ecaaf87664a9bd1e8dcd913fc60f73cb13bd5fe86e48d1de509eb52d5b
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.474356 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.474645 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:06.974632953 +0000 UTC m=+158.133834078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.480400 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.540696 5121 patch_prober.go:28] interesting pod/console-64d44f6ddf-g5dxr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.540806 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-g5dxr" podUID="85c879f7-5fe1-44b3-94ca-dd368a14be73" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.574918 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aeb8de7-b6c5-4617-8139-93af186b1adc-kubelet-dir\") pod \"6aeb8de7-b6c5-4617-8139-93af186b1adc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") "
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.574977 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aeb8de7-b6c5-4617-8139-93af186b1adc-kube-api-access\") pod \"6aeb8de7-b6c5-4617-8139-93af186b1adc\" (UID: \"6aeb8de7-b6c5-4617-8139-93af186b1adc\") "
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.575065 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aeb8de7-b6c5-4617-8139-93af186b1adc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6aeb8de7-b6c5-4617-8139-93af186b1adc" (UID: "6aeb8de7-b6c5-4617-8139-93af186b1adc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.575139 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.575431 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aeb8de7-b6c5-4617-8139-93af186b1adc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.575522 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.075499798 +0000 UTC m=+158.234700923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.584977 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aeb8de7-b6c5-4617-8139-93af186b1adc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6aeb8de7-b6c5-4617-8139-93af186b1adc" (UID: "6aeb8de7-b6c5-4617-8139-93af186b1adc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.676797 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.676966 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aeb8de7-b6c5-4617-8139-93af186b1adc-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.677374 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.177355753 +0000 UTC m=+158.336556878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.733035 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.733242 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrfn9"
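[Editor's note: the repeating MountVolume/UnmountVolume error pair above is the kubelet's volume reconciler retrying against a CSI driver that is not registered on the node. Each sync pass re-queues the operation, the CSI layer cannot build a client because kubevirt.io.hostpath-provisioner is absent from the registered-driver list, and nestedpendingoperations arms a backoff window (durationBeforeRetry 500ms in these entries) before the next attempt is permitted. A minimal illustrative sketch of that gating follows, in Python rather than the kubelet's Go; all names and the flat 500 ms backoff are modeled on this log, not taken from kubelet source.]

import time

# Illustrative model of the retry gating visible in the log; a sketch,
# not kubelet source. Names and structure are hypothetical.
REGISTERED_CSI_DRIVERS = set()  # kubevirt.io.hostpath-provisioner never registered

class PendingOperation:
    """Tracks one volume operation, per-volume, like nestedpendingoperations."""

    def __init__(self, backoff_seconds=0.5):   # mirrors durationBeforeRetry 500ms
        self.backoff_seconds = backoff_seconds
        self.not_before = 0.0                  # earliest permitted retry time

    def attempt(self, driver_name):
        now = time.monotonic()
        if now < self.not_before:
            return "no retries permitted yet"
        if driver_name not in REGISTERED_CSI_DRIVERS:
            # Same failure as the log: no registered driver means no CSI
            # client; re-arm the backoff window and let the reconciler retry.
            self.not_before = now + self.backoff_seconds
            return (f"driver name {driver_name} not found in the list "
                    f"of registered CSI drivers")
        return "mounted"

op = PendingOperation()
print(op.attempt("kubevirt.io.hostpath-provisioner"))  # fails, arms 500ms window
print(op.attempt("kubevirt.io.hostpath-provisioner"))  # "no retries permitted yet"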
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.747743 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.780334 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.780656 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-catalog-content\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.780774 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-utilities\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.780898 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck79l\" (UniqueName: \"kubernetes.io/projected/1a9d6686-1ae2-48c4-91f2-a41a12de699f-kube-api-access-ck79l\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.781064 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.281034564 +0000 UTC m=+158.440235689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868609 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hrfn9"]
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868653 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfhxk"]
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868668 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfhxk" event={"ID":"c51b5df5-ef7d-4d88-b10c-1321140728e8","Type":"ContainerStarted","Data":"5b89110e15cccd8ac628e5ffec8826f92945053c21ac8617ca3f3d479d51b659"}
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868694 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjdc" event={"ID":"1a5d0fd1-d832-4686-905e-ccafef0fd5cd","Type":"ContainerStarted","Data":"f60f9098808efcb7c2b7cd69f9c923e55be6d362ba8dea09c19cee7e68492623"}
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868709 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4p4cc" event={"ID":"395eb036-2c83-4393-b3a7-d6b872cf9e4b","Type":"ContainerStarted","Data":"4a2247ecaaf87664a9bd1e8dcd913fc60f73cb13bd5fe86e48d1de509eb52d5b"}
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868723 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2st6h" event={"ID":"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c","Type":"ContainerStarted","Data":"da65218640625d9c20f5f3d18712a7ea6a5da17dbaf66c9eb47a6ca916459546"}
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868750 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"6aeb8de7-b6c5-4617-8139-93af186b1adc","Type":"ContainerDied","Data":"3bc5b1ff73c02a1c8c6540abec1a3fbfd965dcff1b4501c3284c3a5987b44048"}
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868791 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc5b1ff73c02a1c8c6540abec1a3fbfd965dcff1b4501c3284c3a5987b44048"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868805 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4p4cc"]
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868818 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8cdp" event={"ID":"77411de1-0221-4222-b0f1-33d1beba40ad","Type":"ContainerStarted","Data":"1a9b8862bf8f59c9acac8ddbdfe77fc5415956c86e7e08c47963771791fe58e5"}
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868872 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmjdc"]
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.868924 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-88gft"]
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.870443 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6aeb8de7-b6c5-4617-8139-93af186b1adc" containerName="pruner"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.870487 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aeb8de7-b6c5-4617-8139-93af186b1adc" containerName="pruner"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.873869 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="6aeb8de7-b6c5-4617-8139-93af186b1adc" containerName="pruner"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.927995 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-utilities\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.928137 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ck79l\" (UniqueName: \"kubernetes.io/projected/1a9d6686-1ae2-48c4-91f2-a41a12de699f-kube-api-access-ck79l\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.928284 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-catalog-content\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.928350 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:06 crc kubenswrapper[5121]: E0126 00:12:06.928835 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.428821501 +0000 UTC m=+158.588022626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.929489 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-utilities\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: I0126 00:12:06.932841 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-catalog-content\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:06 crc kubenswrapper[5121]: W0126 00:12:06.983006 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a04527_f4f6_4570_8b32_08c2e4515c41.slice/crio-3e0e22474587b7ed5c428439b541258f31b776abf0c59ec7c5be6b32fd96deb3 WatchSource:0}: Error finding container 3e0e22474587b7ed5c428439b541258f31b776abf0c59ec7c5be6b32fd96deb3: Status 404 returned error can't find the container with id 3e0e22474587b7ed5c428439b541258f31b776abf0c59ec7c5be6b32fd96deb3
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.001853 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck79l\" (UniqueName: \"kubernetes.io/projected/1a9d6686-1ae2-48c4-91f2-a41a12de699f-kube-api-access-ck79l\") pod \"redhat-operators-hrfn9\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.029826 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.030221 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.530195852 +0000 UTC m=+158.689396977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.085504 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.131917 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.135201 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.135628 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.635605974 +0000 UTC m=+158.794807099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.186545 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.186598 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88gft"]
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.186616 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdxmp"]
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.186635 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rfv"]
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.187082 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.203547 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-prnb4"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.238120 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.238174 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd78e9f-18ce-4592-866f-029d883e2d95-config-volume\") pod \"3bd78e9f-18ce-4592-866f-029d883e2d95\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.238259 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljll9\" (UniqueName: \"kubernetes.io/projected/3bd78e9f-18ce-4592-866f-029d883e2d95-kube-api-access-ljll9\") pod \"3bd78e9f-18ce-4592-866f-029d883e2d95\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.238425 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bd78e9f-18ce-4592-866f-029d883e2d95-secret-volume\") pod \"3bd78e9f-18ce-4592-866f-029d883e2d95\" (UID: \"3bd78e9f-18ce-4592-866f-029d883e2d95\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.240078 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.740046557 +0000 UTC m=+158.899247682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.241361 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd78e9f-18ce-4592-866f-029d883e2d95-config-volume" (OuterVolumeSpecName: "config-volume") pod "3bd78e9f-18ce-4592-866f-029d883e2d95" (UID: "3bd78e9f-18ce-4592-866f-029d883e2d95"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.258188 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:12:07 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Jan 26 00:12:07 crc kubenswrapper[5121]: [+]process-running ok
Jan 26 00:12:07 crc kubenswrapper[5121]: healthz check failed
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.258303 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.281864 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd78e9f-18ce-4592-866f-029d883e2d95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3bd78e9f-18ce-4592-866f-029d883e2d95" (UID: "3bd78e9f-18ce-4592-866f-029d883e2d95"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.286284 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd78e9f-18ce-4592-866f-029d883e2d95-kube-api-access-ljll9" (OuterVolumeSpecName: "kube-api-access-ljll9") pod "3bd78e9f-18ce-4592-866f-029d883e2d95" (UID: "3bd78e9f-18ce-4592-866f-029d883e2d95"). InnerVolumeSpecName "kube-api-access-ljll9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.343699 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.344209 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-utilities\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.344279 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-catalog-content\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.344524 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9nk\" (UniqueName: \"kubernetes.io/projected/de3aec27-d9d2-46ca-b04e-b2aa4358f339-kube-api-access-ck9nk\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.344752 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bd78e9f-18ce-4592-866f-029d883e2d95-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.344804 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd78e9f-18ce-4592-866f-029d883e2d95-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.344825 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljll9\" (UniqueName: \"kubernetes.io/projected/3bd78e9f-18ce-4592-866f-029d883e2d95-kube-api-access-ljll9\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.349298 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.849280964 +0000 UTC m=+159.008482089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.451475 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.451717 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-utilities\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.451747 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-catalog-content\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.451822 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9nk\" (UniqueName: \"kubernetes.io/projected/de3aec27-d9d2-46ca-b04e-b2aa4358f339-kube-api-access-ck9nk\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.452236 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:07.952215122 +0000 UTC m=+159.111416237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.452993 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-utilities\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.458940 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-catalog-content\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.471780 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.472400 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3bd78e9f-18ce-4592-866f-029d883e2d95" containerName="collect-profiles"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.472413 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd78e9f-18ce-4592-866f-029d883e2d95" containerName="collect-profiles"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.472501 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3bd78e9f-18ce-4592-866f-029d883e2d95" containerName="collect-profiles"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.478294 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.478352 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.499689 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck9nk\" (UniqueName: \"kubernetes.io/projected/de3aec27-d9d2-46ca-b04e-b2aa4358f339-kube-api-access-ck9nk\") pod \"redhat-operators-88gft\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.505017 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88gft"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.555399 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.555555 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.559680 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.560016 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.060000035 +0000 UTC m=+159.219201160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.565360 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.565558 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.587473 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b" event={"ID":"3bd78e9f-18ce-4592-866f-029d883e2d95","Type":"ContainerDied","Data":"bd5730c26030ce728948e5c0fef46c54dcf663668446ea8b039117f8f91df8dd"}
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.587518 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5730c26030ce728948e5c0fef46c54dcf663668446ea8b039117f8f91df8dd"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.587609 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-zxr7b"
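[Editor's note: the "SyncLoop (PLEG)" entries here and below come from the Pod Lifecycle Event Generator: the kubelet periodically relists containers through the container runtime and diffs the snapshot against the previous one, turning each difference into a ContainerStarted or ContainerDied event (the generic.go "container finished" lines record the exit code it observed). A toy diff of that shape, purely illustrative and not kubelet source:]

# Toy illustration of how a relist-and-diff loop yields the
# ContainerStarted/ContainerDied events in these entries. The kubelet's
# PLEG works against the CRI, not dicts like these.
def pleg_events(previous, current):
    """Map container-id -> state ('running'/'exited'); emit lifecycle events."""
    events = []
    for cid, state in current.items():
        if state == "running" and previous.get(cid) != "running":
            events.append(("ContainerStarted", cid))
        elif state == "exited" and previous.get(cid) == "running":
            events.append(("ContainerDied", cid))
    return events

prev = {"3426ed52": "running"}
curr = {"3426ed52": "exited", "8e0ba689": "running"}
print(pleg_events(prev, curr))
# [('ContainerDied', '3426ed52'), ('ContainerStarted', '8e0ba689')]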
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.660547 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.660718 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.160685105 +0000 UTC m=+159.319886230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.662532 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03005723-7899-48d1-96f0-dec2c9563ccb-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.662646 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03005723-7899-48d1-96f0-dec2c9563ccb-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.663738 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.163717007 +0000 UTC m=+159.322918132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.662816 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.674289 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfhxk" event={"ID":"c51b5df5-ef7d-4d88-b10c-1321140728e8","Type":"ContainerDied","Data":"3426ed528a702b5ef7b7100a4209abb4092c082cd45cefcecc0cde8db6221218"}
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.674240 5121 generic.go:358] "Generic (PLEG): container finished" podID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerID="3426ed528a702b5ef7b7100a4209abb4092c082cd45cefcecc0cde8db6221218" exitCode=0
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.681082 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hrfn9"]
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.713653 5121 generic.go:358] "Generic (PLEG): container finished" podID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerID="2e6e9a2057090b684b4d29ad44490a73a04d9bf56c9140f768603106cd0c626a" exitCode=0
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.713724 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjdc" event={"ID":"1a5d0fd1-d832-4686-905e-ccafef0fd5cd","Type":"ContainerDied","Data":"2e6e9a2057090b684b4d29ad44490a73a04d9bf56c9140f768603106cd0c626a"}
Jan 26 00:12:07 crc kubenswrapper[5121]: W0126 00:12:07.743888 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a9d6686_1ae2_48c4_91f2_a41a12de699f.slice/crio-8e0ba689fca9fae08775016845af9af4b3d02fa15d8f9f1673b93c743148093b WatchSource:0}: Error finding container 8e0ba689fca9fae08775016845af9af4b3d02fa15d8f9f1673b93c743148093b: Status 404 returned error can't find the container with id 8e0ba689fca9fae08775016845af9af4b3d02fa15d8f9f1673b93c743148093b
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.755688 5121 generic.go:358] "Generic (PLEG): container finished" podID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerID="892c425b55d5562b5dd6102cc77f438e4d0b4d15f1570fa4a50a20af338d56c3" exitCode=0
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.755873 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4p4cc" event={"ID":"395eb036-2c83-4393-b3a7-d6b872cf9e4b","Type":"ContainerDied","Data":"892c425b55d5562b5dd6102cc77f438e4d0b4d15f1570fa4a50a20af338d56c3"}
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.765017 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.765273 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.265238642 +0000 UTC m=+159.424439827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.765351 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.765524 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03005723-7899-48d1-96f0-dec2c9563ccb-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.765606 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03005723-7899-48d1-96f0-dec2c9563ccb-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.766289 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.266278523 +0000 UTC m=+159.425479648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.767524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03005723-7899-48d1-96f0-dec2c9563ccb-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.790643 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerStarted","Data":"3e0e22474587b7ed5c428439b541258f31b776abf0c59ec7c5be6b32fd96deb3"}
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.821499 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2st6h" event={"ID":"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c","Type":"ContainerStarted","Data":"cdfaffa6438d5292139067b0b705f0f9204985c3515f5e389d0d0910f46efc8a"}
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.826891 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03005723-7899-48d1-96f0-dec2c9563ccb-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.827854 5121 generic.go:358] "Generic (PLEG): container finished" podID="77411de1-0221-4222-b0f1-33d1beba40ad" containerID="73f689c5302ac17378560ddf7812e8a4db6eb9f7b04c50485f05d026e126d15f" exitCode=0
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.827919 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8cdp" event={"ID":"77411de1-0221-4222-b0f1-33d1beba40ad","Type":"ContainerDied","Data":"73f689c5302ac17378560ddf7812e8a4db6eb9f7b04c50485f05d026e126d15f"}
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.831919 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rfv" event={"ID":"3225226b-6f86-4163-b401-b9136c86dfed","Type":"ContainerStarted","Data":"e580034c525da8cf3be61fdcaa055df120a5d569f4a200ec289e2c7fdbd9004d"}
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.867363 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.868842 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.368819709 +0000 UTC m=+159.528020844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
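[Editor's note: the loop above only clears once the hostpath-provisioner node plugin registers itself with this kubelet again. Node-plugin registration happens through a socket the driver drops under the kubelet's plugin registry directory, conventionally /var/lib/kubelet/plugins_registry. A small diagnostic sketch follows; it assumes filesystem access on the node, and the directory path is the upstream convention rather than something confirmed by this log.]

import os

# Diagnostic sketch (assumes filesystem access on the node). The kubelet
# discovers CSI node plugins via registration sockets in this directory;
# if kubevirt.io.hostpath-provisioner has no entry, the "not found in the
# list of registered CSI drivers" errors above will persist.
PLUGINS_REGISTRY = "/var/lib/kubelet/plugins_registry"

def registered_plugins(path=PLUGINS_REGISTRY):
    try:
        return sorted(os.listdir(path))
    except FileNotFoundError:
        return []

plugins = registered_plugins()
print("\n".join(plugins) or "no plugins registered")
print("hostpath-provisioner registered:",
      any("kubevirt.io.hostpath-provisioner" in p for p in plugins))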
Jan 26 00:12:07 crc kubenswrapper[5121]: I0126 00:12:07.969164 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:07 crc kubenswrapper[5121]: E0126 00:12:07.969565 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.46954309 +0000 UTC m=+159.628744275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.021795 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.070612 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.070845 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.570815378 +0000 UTC m=+159.730016503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.071009 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.071415 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.571398495 +0000 UTC m=+159.730599620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.131250 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88gft"]
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.172442 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.173905 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.673861689 +0000 UTC m=+159.833062814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.275317 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.275740 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.775723964 +0000 UTC m=+159.934925089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.329268 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 26 00:12:08 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Jan 26 00:12:08 crc kubenswrapper[5121]: [+]process-running ok
Jan 26 00:12:08 crc kubenswrapper[5121]: healthz check failed
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.329371 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.379156 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.379996 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.879976851 +0000 UTC m=+160.039177976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.480696 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.481057 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:08.981043703 +0000 UTC m=+160.140244828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.486937 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.582084 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.582385 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.082321151 +0000 UTC m=+160.241522286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.582798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks"
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.583155 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.083140815 +0000 UTC m=+160.242341940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.683653 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.683930 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.183879447 +0000 UTC m=+160.343080572 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.786097 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.786634 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.286611068 +0000 UTC m=+160.445812193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.847655 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gft" event={"ID":"de3aec27-d9d2-46ca-b04e-b2aa4358f339","Type":"ContainerStarted","Data":"6388d81d091d68e756c451ee0e563243f0e9ada38d7b802d51e4981a69c65a6e"} Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.848637 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"03005723-7899-48d1-96f0-dec2c9563ccb","Type":"ContainerStarted","Data":"8a7fd2ee57f29a382e6edcabc54d260975052d5cce89b13bf4bb642db81fa9a0"} Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.849487 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrfn9" event={"ID":"1a9d6686-1ae2-48c4-91f2-a41a12de699f","Type":"ContainerStarted","Data":"8e0ba689fca9fae08775016845af9af4b3d02fa15d8f9f1673b93c743148093b"} Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.850909 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerStarted","Data":"47e6865a15380527a75033d83c095eb4efa7c45f7a90c65e01129998eab73e12"} Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.888349 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.888517 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.388471984 +0000 UTC m=+160.547673109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:08 crc kubenswrapper[5121]: I0126 00:12:08.888902 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:08 crc kubenswrapper[5121]: E0126 00:12:08.889283 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.389275168 +0000 UTC m=+160.548476293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:08.989643 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:08.990068 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.49004552 +0000 UTC m=+160.649246645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.091691 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.092037 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.592018719 +0000 UTC m=+160.751219844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.192679 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.192916 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.692885175 +0000 UTC m=+160.852086300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.193206 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.194421 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.69440445 +0000 UTC m=+160.853605575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.242592 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:09 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:09 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:09 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.242674 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.294253 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.294817 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.794795841 +0000 UTC m=+160.953996976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.294869 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.295182 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.795172123 +0000 UTC m=+160.954373248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.396329 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.396504 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.89647493 +0000 UTC m=+161.055676055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.396713 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.397086 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.897070858 +0000 UTC m=+161.056271983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.498502 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.498842 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:09.99882014 +0000 UTC m=+161.158021265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.600483 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.601079 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.101054867 +0000 UTC m=+161.260255992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.741426 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.741596 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.241572046 +0000 UTC m=+161.400773181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.741676 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.742072 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.24206099 +0000 UTC m=+161.401262115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.842502 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.842958 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.342939616 +0000 UTC m=+161.502140741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.878588 5121 generic.go:358] "Generic (PLEG): container finished" podID="3225226b-6f86-4163-b401-b9136c86dfed" containerID="c87f3dff942f3d615f2c61c40b29c39e40ea20bb9a0cb40f150dee26c4759ddc" exitCode=0 Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.878766 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rfv" event={"ID":"3225226b-6f86-4163-b401-b9136c86dfed","Type":"ContainerDied","Data":"c87f3dff942f3d615f2c61c40b29c39e40ea20bb9a0cb40f150dee26c4759ddc"} Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.882990 5121 generic.go:358] "Generic (PLEG): container finished" podID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerID="47e6865a15380527a75033d83c095eb4efa7c45f7a90c65e01129998eab73e12" exitCode=0 Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.883069 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerDied","Data":"47e6865a15380527a75033d83c095eb4efa7c45f7a90c65e01129998eab73e12"} Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.913168 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2st6h" event={"ID":"e2c23c20-cf98-42ae-b5fb-5bbde2b0740c","Type":"ContainerStarted","Data":"943006ff674967ef1100e350af30d49e00a3137dcf0aa29d0c9ec6ff2e8337c9"} Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.923541 5121 generic.go:358] "Generic (PLEG): container finished" podID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerID="87806c6f13a55b52d6912e779da4a98e0762c7fbf0dfb496f3311de82cdf4743" exitCode=0 Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.923611 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrfn9" event={"ID":"1a9d6686-1ae2-48c4-91f2-a41a12de699f","Type":"ContainerDied","Data":"87806c6f13a55b52d6912e779da4a98e0762c7fbf0dfb496f3311de82cdf4743"} Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.943981 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:09 crc kubenswrapper[5121]: E0126 00:12:09.944361 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.444347248 +0000 UTC m=+161.603548373 (durationBeforeRetry 500ms). 
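The PLEG lines interleaved above report the marketplace catalog pods' extract containers finishing with exitCode=0, i.e. normal completion rather than a crash; each event carries only a pod UID, an event type, and a container ID. A minimal sketch of that event shape and how a sync loop might dispatch on it (illustrative types, not kubelet's):

```go
package main

import "fmt"

// podLifecycleEvent mirrors the fields visible in the log lines:
// event={"ID":<pod UID>,"Type":"ContainerStarted"/"ContainerDied","Data":<container ID>}.
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // "ContainerStarted", "ContainerDied", ...
	Data string // container ID the event refers to
}

func handle(ev podLifecycleEvent) {
	switch ev.Type {
	case "ContainerStarted":
		fmt.Printf("SyncLoop (PLEG): container %s started in pod %s\n", ev.Data, ev.ID)
	case "ContainerDied":
		// exitCode=0, as for the extract containers above, is a normal
		// completion; the pod moves on rather than restarting.
		fmt.Printf("SyncLoop (PLEG): container %s finished in pod %s\n", ev.Data, ev.ID)
	}
}

func main() {
	// Values copied from the redhat-marketplace-zdxmp event above.
	handle(podLifecycleEvent{
		ID:   "42a04527-f4f6-4570-8b32-08c2e4515c41",
		Type: "ContainerDied",
		Data: "47e6865a15380527a75033d83c095eb4efa7c45f7a90c65e01129998eab73e12",
	})
}
```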
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:09 crc kubenswrapper[5121]: I0126 00:12:09.983949 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2st6h" podStartSLOduration=133.983912238 podStartE2EDuration="2m13.983912238s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:09.955247106 +0000 UTC m=+161.114448231" watchObservedRunningTime="2026-01-26 00:12:09.983912238 +0000 UTC m=+161.143113363" Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.045939 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.046743 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.546697998 +0000 UTC m=+161.705899133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.165062 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.165590 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.665571745 +0000 UTC m=+161.824772870 (durationBeforeRetry 500ms). 
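The pod_startup_latency_tracker entry above is plain arithmetic: network-metrics-daemon-2st6h was created at 00:09:56 and watched running at 00:12:09.983912238, so podStartSLOduration = podStartE2EDuration = 133.983912238s (2m13.98s); the zeroed firstStartedPulling/lastFinishedPulling timestamps mean no image pull was observed, so nothing is subtracted. The same computation in Go, with the timestamps copied from the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Parse errors ignored because the inputs are fixed strings from the log.
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2026-01-26 00:09:56 +0000 UTC")
	watched, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-01-26 00:12:09.983912238 +0000 UTC")

	// With no observed image pull, SLO duration equals end-to-end duration.
	d := watched.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f (%s)\n", d.Seconds(), d)
	// Output: podStartSLOduration=133.983912238 (2m13.983912238s)
}
```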
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.251360 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:10 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:10 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:10 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.251441 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.265964 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.266164 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.766137942 +0000 UTC m=+161.925339067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.266355 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.266779 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.766770581 +0000 UTC m=+161.925971706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.367693 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.368377 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.868343407 +0000 UTC m=+162.027544532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.470880 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.471389 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:10.971365588 +0000 UTC m=+162.130566713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.571652 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.571798 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.071779489 +0000 UTC m=+162.230980614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.571939 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.572223 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.072216783 +0000 UTC m=+162.231417908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.577943 5121 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.608867 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-r5x7x" Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.673547 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.673779 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.173717807 +0000 UTC m=+162.332918932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.674366 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.674726 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.174709107 +0000 UTC m=+162.333910232 (durationBeforeRetry 500ms). 
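The recovery starts here: at 00:12:10.577 the kubelet's plugin watcher notices the registration socket kubevirt.io.hostpath-provisioner-reg.sock appear under /var/lib/kubelet/plugins_registry. Kubelet implements this with a filesystem watch on that directory; a sketch under the assumption that any new *.sock file there is a plugin registration socket, using github.com/fsnotify/fsnotify:

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// The directory kubelet watches for plugin registration sockets.
	if err := watcher.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}

	for event := range watcher.Events {
		// A Create event for a *.sock file is treated as a new plugin,
		// mirroring "Adding socket path ... to desired state cache" above.
		if event.Op&fsnotify.Create != 0 && filepath.Ext(event.Name) == ".sock" {
			log.Printf("Adding socket path or updating timestamp to desired state cache path=%q",
				event.Name)
		}
	}
}
```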
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.775212 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.775971 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.275950194 +0000 UTC m=+162.435151319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.877156 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.877633 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.377611563 +0000 UTC m=+162.536812688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.923480 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-ldf8d" Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.937173 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hs67g" event={"ID":"197eb808-9411-4b4c-b882-85f9c3479dae","Type":"ContainerStarted","Data":"3fb5256cf4b5bc19f8543ba149e97fec18fc03ba4318ae3752ac92963836e960"} Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.939861 5121 generic.go:358] "Generic (PLEG): container finished" podID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerID="55fa921ee2d446f9c4eb888a8fe68467ec0c7a95f3028ca0a2e67910b974fe43" exitCode=0 Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.939973 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gft" event={"ID":"de3aec27-d9d2-46ca-b04e-b2aa4358f339","Type":"ContainerDied","Data":"55fa921ee2d446f9c4eb888a8fe68467ec0c7a95f3028ca0a2e67910b974fe43"} Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.942491 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"03005723-7899-48d1-96f0-dec2c9563ccb","Type":"ContainerStarted","Data":"6ae8578cb847ee9577e54fcb8911a92284d5e84540ab6adc3fa6a4986f333a88"} Jan 26 00:12:10 crc kubenswrapper[5121]: I0126 00:12:10.981021 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:10 crc kubenswrapper[5121]: E0126 00:12:10.981514 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.481490669 +0000 UTC m=+162.640691794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.016541 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.016514623 podStartE2EDuration="4.016514623s" podCreationTimestamp="2026-01-26 00:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:11.010666647 +0000 UTC m=+162.169867772" watchObservedRunningTime="2026-01-26 00:12:11.016514623 +0000 UTC m=+162.175715748" Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.083196 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:11 crc kubenswrapper[5121]: E0126 00:12:11.086799 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.586778938 +0000 UTC m=+162.745980063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.186148 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:11 crc kubenswrapper[5121]: E0126 00:12:11.186391 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.686348634 +0000 UTC m=+162.845549759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.186962 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:11 crc kubenswrapper[5121]: E0126 00:12:11.187450 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.687440987 +0000 UTC m=+162.846642112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.236552 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:11 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:11 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:11 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.236621 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.288660 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:11 crc kubenswrapper[5121]: E0126 00:12:11.288930 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.78890068 +0000 UTC m=+162.948101805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.289526 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:11 crc kubenswrapper[5121]: E0126 00:12:11.290403 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.790377975 +0000 UTC m=+162.949579100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.294294 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.390897 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:11 crc kubenswrapper[5121]: E0126 00:12:11.391906 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.891881429 +0000 UTC m=+163.051082564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.492925 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:11 crc kubenswrapper[5121]: E0126 00:12:11.493310 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:12:11.993293151 +0000 UTC m=+163.152494276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-c2pks" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.523239 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-p5bxm" Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.535058 5121 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T00:12:10.577960705Z","UUID":"b7157653-0499-4682-8f4b-ddf5893bcb86","Handler":null,"Name":"","Endpoint":""} Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.539398 5121 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.539676 5121 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.596191 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.617097 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.701909 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.727299 5121 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.727350 5121 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:11 crc kubenswrapper[5121]: I0126 00:12:11.765835 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-c2pks\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.004606 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.113238 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hs67g" event={"ID":"197eb808-9411-4b4c-b882-85f9c3479dae","Type":"ContainerStarted","Data":"bae25b5e6f8322afc0972cbef08e97bd876d4c98db5fa75f2c7137857a7bcf59"} Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.274737 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.320809 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:12 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:12 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:12 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.320880 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.548257 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tgcgk"] Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.548587 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" containerID="cri-o://9bfe21660e4297076895acff14c1840cdb69d0f276a1b49d4cb27dd228e3d78c" gracePeriod=30 Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.579306 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"] Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.579612 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" containerID="cri-o://126ec523caf9ee3a46284a8a1d1891b443ea45b0b94ccf25c0554edf1e68a240" gracePeriod=30 Jan 26 00:12:12 crc kubenswrapper[5121]: I0126 00:12:12.838592 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-c2pks"] Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.120430 5121 generic.go:358] "Generic (PLEG): container finished" podID="03005723-7899-48d1-96f0-dec2c9563ccb" containerID="6ae8578cb847ee9577e54fcb8911a92284d5e84540ab6adc3fa6a4986f333a88" exitCode=0 Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.120519 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"03005723-7899-48d1-96f0-dec2c9563ccb","Type":"ContainerDied","Data":"6ae8578cb847ee9577e54fcb8911a92284d5e84540ab6adc3fa6a4986f333a88"} Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.127995 5121 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="hostpath-provisioner/csi-hostpathplugin-hs67g" event={"ID":"197eb808-9411-4b4c-b882-85f9c3479dae","Type":"ContainerStarted","Data":"97ae9157746fe7bcf30bf2e00a0544e254ec3862a20abbb9d92b1a4ab68a4c7b"} Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.129431 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" event={"ID":"377fc649-7ccb-4b5e-a98c-f217298fd396","Type":"ContainerStarted","Data":"85adef2e4935b11730e3b5850afd882b0498ede0f3a85363bbc8983828b06714"} Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.236624 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:13 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:13 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:13 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.236722 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.252205 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:13 crc kubenswrapper[5121]: I0126 00:12:13.252275 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.137061 5121 generic.go:358] "Generic (PLEG): container finished" podID="eac9c212-b298-468b-a465-d924254ae8ab" containerID="126ec523caf9ee3a46284a8a1d1891b443ea45b0b94ccf25c0554edf1e68a240" exitCode=0 Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.137152 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" event={"ID":"eac9c212-b298-468b-a465-d924254ae8ab","Type":"ContainerDied","Data":"126ec523caf9ee3a46284a8a1d1891b443ea45b0b94ccf25c0554edf1e68a240"} Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.236012 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:14 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:14 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:14 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.236377 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:14 crc kubenswrapper[5121]: 
I0126 00:12:14.427151 5121 ???:1] "http: TLS handshake error from 192.168.126.11:50892: no serving certificate available for the kubelet" Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.629865 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-tgcgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.629946 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.714990 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.785534 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03005723-7899-48d1-96f0-dec2c9563ccb-kubelet-dir\") pod \"03005723-7899-48d1-96f0-dec2c9563ccb\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.785652 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03005723-7899-48d1-96f0-dec2c9563ccb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03005723-7899-48d1-96f0-dec2c9563ccb" (UID: "03005723-7899-48d1-96f0-dec2c9563ccb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.785664 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03005723-7899-48d1-96f0-dec2c9563ccb-kube-api-access\") pod \"03005723-7899-48d1-96f0-dec2c9563ccb\" (UID: \"03005723-7899-48d1-96f0-dec2c9563ccb\") " Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.785979 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03005723-7899-48d1-96f0-dec2c9563ccb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.796193 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03005723-7899-48d1-96f0-dec2c9563ccb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03005723-7899-48d1-96f0-dec2c9563ccb" (UID: "03005723-7899-48d1-96f0-dec2c9563ccb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:14 crc kubenswrapper[5121]: I0126 00:12:14.887087 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03005723-7899-48d1-96f0-dec2c9563ccb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:15 crc kubenswrapper[5121]: I0126 00:12:15.143846 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 26 00:12:15 crc kubenswrapper[5121]: I0126 00:12:15.143880 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"03005723-7899-48d1-96f0-dec2c9563ccb","Type":"ContainerDied","Data":"8a7fd2ee57f29a382e6edcabc54d260975052d5cce89b13bf4bb642db81fa9a0"} Jan 26 00:12:15 crc kubenswrapper[5121]: I0126 00:12:15.143908 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a7fd2ee57f29a382e6edcabc54d260975052d5cce89b13bf4bb642db81fa9a0" Jan 26 00:12:15 crc kubenswrapper[5121]: I0126 00:12:15.235580 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:15 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:15 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:15 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:15 crc kubenswrapper[5121]: I0126 00:12:15.235648 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:15 crc kubenswrapper[5121]: E0126 00:12:15.668843 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:15 crc kubenswrapper[5121]: E0126 00:12:15.670951 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:15 crc kubenswrapper[5121]: E0126 00:12:15.672543 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:15 crc kubenswrapper[5121]: E0126 00:12:15.672581 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:16 crc kubenswrapper[5121]: I0126 00:12:16.235331 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:16 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:16 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:16 crc kubenswrapper[5121]: healthz check failed 
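Editor's note: the recurring router-default startup-probe failures above use the Kubernetes-style aggregated healthz format — one `[+]name ok` or `[-]name failed: reason withheld` line per named check, followed by `healthz check failed` — and the kubelet records `statuscode: 500` until every check passes. The sketch below is a minimal, hypothetical Go handler illustrating only that response format; it is not the openshift-router implementation, and the check names and the `synced` flag are taken from the probe output purely for illustration.

```go
// healthz_sketch.go — hypothetical sketch of the aggregated health-check
// format visible in the startup-probe output above. Not router code.
package main

import (
	"fmt"
	"net/http"
)

// check is one named readiness condition; the router's probe output
// shows backend-http, has-synced, and process-running.
type check struct {
	name string
	ok   func() bool
}

func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body := ""
		failed := false
		for _, c := range checks {
			if c.ok() {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				// Reasons are withheld unless verbose output is requested,
				// matching the "reason withheld" lines in the log.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			body += "healthz check failed\n"
			// The kubelet's probe then logs: "HTTP probe failed with statuscode: 500".
			w.WriteHeader(http.StatusInternalServerError)
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	synced := false // would flip to true once route state is synced
	http.HandleFunc("/healthz", healthzHandler([]check{
		{"backend-http", func() bool { return synced }},
		{"has-synced", func() bool { return synced }},
		{"process-running", func() bool { return true }},
	}))
	_ = http.ListenAndServe(":8080", nil)
}
```

Consistent with this format, the entries that follow show `[-]has-synced` flip to `[+]has-synced` at 00:12:18, and by 00:12:20 the kubelet reports the startup probe `status="started"` and the readiness probe `status="ready"` for router-default-68cf44c8b8-bgksv.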
Jan 26 00:12:16 crc kubenswrapper[5121]: I0126 00:12:16.235788 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:16 crc kubenswrapper[5121]: I0126 00:12:16.536258 5121 patch_prober.go:28] interesting pod/console-64d44f6ddf-g5dxr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 00:12:16 crc kubenswrapper[5121]: I0126 00:12:16.536322 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-g5dxr" podUID="85c879f7-5fe1-44b3-94ca-dd368a14be73" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 00:12:17 crc kubenswrapper[5121]: I0126 00:12:17.235555 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:17 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Jan 26 00:12:17 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:17 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:17 crc kubenswrapper[5121]: I0126 00:12:17.235667 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:17 crc kubenswrapper[5121]: I0126 00:12:17.471523 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:17 crc kubenswrapper[5121]: I0126 00:12:17.471694 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:17 crc kubenswrapper[5121]: I0126 00:12:17.471801 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:12:18 crc kubenswrapper[5121]: I0126 00:12:18.234620 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:18 crc kubenswrapper[5121]: [+]has-synced ok Jan 26 00:12:18 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:18 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:18 crc kubenswrapper[5121]: I0126 00:12:18.234710 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:18 crc 
kubenswrapper[5121]: I0126 00:12:18.267589 5121 generic.go:358] "Generic (PLEG): container finished" podID="78781662-c6e5-43f1-8914-a11c064230ca" containerID="9bfe21660e4297076895acff14c1840cdb69d0f276a1b49d4cb27dd228e3d78c" exitCode=0 Jan 26 00:12:18 crc kubenswrapper[5121]: I0126 00:12:18.267721 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" event={"ID":"78781662-c6e5-43f1-8914-a11c064230ca","Type":"ContainerDied","Data":"9bfe21660e4297076895acff14c1840cdb69d0f276a1b49d4cb27dd228e3d78c"} Jan 26 00:12:19 crc kubenswrapper[5121]: I0126 00:12:19.234810 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bgksv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:12:19 crc kubenswrapper[5121]: [+]has-synced ok Jan 26 00:12:19 crc kubenswrapper[5121]: [+]process-running ok Jan 26 00:12:19 crc kubenswrapper[5121]: healthz check failed Jan 26 00:12:19 crc kubenswrapper[5121]: I0126 00:12:19.235146 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" podUID="87353f19-deb2-41e6-bff6-3e2bb861ce33" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:12:19 crc kubenswrapper[5121]: I0126 00:12:19.742221 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ae5e3aed8cf07bc3ecc9b103c7c135b4a03b71c6a016530e8807e7a153f33e67"} pod="openshift-console/downloads-747b44746d-jxx48" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 00:12:19 crc kubenswrapper[5121]: I0126 00:12:19.742307 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" containerID="cri-o://ae5e3aed8cf07bc3ecc9b103c7c135b4a03b71c6a016530e8807e7a153f33e67" gracePeriod=2 Jan 26 00:12:19 crc kubenswrapper[5121]: I0126 00:12:19.742619 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:19 crc kubenswrapper[5121]: I0126 00:12:19.742678 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:19 crc kubenswrapper[5121]: I0126 00:12:19.768905 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hs67g" podStartSLOduration=35.768884821 podStartE2EDuration="35.768884821s" podCreationTimestamp="2026-01-26 00:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:19.768458988 +0000 UTC m=+170.927660113" watchObservedRunningTime="2026-01-26 00:12:19.768884821 +0000 UTC m=+170.928085946" Jan 26 00:12:20 crc kubenswrapper[5121]: I0126 00:12:20.236249 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:12:20 crc kubenswrapper[5121]: I0126 00:12:20.239124 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-bgksv" Jan 26 00:12:22 crc kubenswrapper[5121]: I0126 00:12:22.024102 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rqsvg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 26 00:12:22 crc kubenswrapper[5121]: I0126 00:12:22.024499 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 26 00:12:25 crc kubenswrapper[5121]: I0126 00:12:25.629595 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-tgcgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:12:25 crc kubenswrapper[5121]: I0126 00:12:25.630257 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:12:25 crc kubenswrapper[5121]: E0126 00:12:25.665422 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:25 crc kubenswrapper[5121]: E0126 00:12:25.667147 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:25 crc kubenswrapper[5121]: E0126 00:12:25.668961 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:25 crc kubenswrapper[5121]: E0126 00:12:25.669002 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:26 crc kubenswrapper[5121]: 
I0126 00:12:26.311945 5121 generic.go:358] "Generic (PLEG): container finished" podID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerID="ae5e3aed8cf07bc3ecc9b103c7c135b4a03b71c6a016530e8807e7a153f33e67" exitCode=0 Jan 26 00:12:26 crc kubenswrapper[5121]: I0126 00:12:26.312036 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-jxx48" event={"ID":"75e2dc1c-f659-4dc2-a18d-141f468e666a","Type":"ContainerDied","Data":"ae5e3aed8cf07bc3ecc9b103c7c135b4a03b71c6a016530e8807e7a153f33e67"} Jan 26 00:12:28 crc kubenswrapper[5121]: I0126 00:12:28.376639 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-g5dxr" Jan 26 00:12:28 crc kubenswrapper[5121]: I0126 00:12:28.383005 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-g5dxr" Jan 26 00:12:29 crc kubenswrapper[5121]: I0126 00:12:29.332369 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dhklg_946bd7f5-92cd-435d-9ff8-72af506917be/kube-multus-additional-cni-plugins/0.log" Jan 26 00:12:29 crc kubenswrapper[5121]: I0126 00:12:29.332442 5121 generic.go:358] "Generic (PLEG): container finished" podID="946bd7f5-92cd-435d-9ff8-72af506917be" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" exitCode=137 Jan 26 00:12:29 crc kubenswrapper[5121]: I0126 00:12:29.332600 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" event={"ID":"946bd7f5-92cd-435d-9ff8-72af506917be","Type":"ContainerDied","Data":"6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1"} Jan 26 00:12:29 crc kubenswrapper[5121]: I0126 00:12:29.743456 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:29 crc kubenswrapper[5121]: I0126 00:12:29.743985 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:32 crc kubenswrapper[5121]: I0126 00:12:32.323658 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" Jan 26 00:12:33 crc kubenswrapper[5121]: I0126 00:12:33.022670 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rqsvg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded" start-of-body= Jan 26 00:12:33 crc kubenswrapper[5121]: I0126 00:12:33.022783 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": context deadline exceeded" Jan 26 00:12:34 crc kubenswrapper[5121]: I0126 00:12:34.941248 5121 ???:1] "http: TLS handshake error from 192.168.126.11:58260: no serving certificate 
available for the kubelet" Jan 26 00:12:35 crc kubenswrapper[5121]: I0126 00:12:35.629317 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-tgcgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:12:35 crc kubenswrapper[5121]: I0126 00:12:35.629430 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:12:35 crc kubenswrapper[5121]: E0126 00:12:35.665253 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:35 crc kubenswrapper[5121]: E0126 00:12:35.666108 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:35 crc kubenswrapper[5121]: E0126 00:12:35.666497 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:35 crc kubenswrapper[5121]: E0126 00:12:35.666544 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:38 crc kubenswrapper[5121]: I0126 00:12:38.386221 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 26 00:12:39 crc kubenswrapper[5121]: I0126 00:12:39.742968 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:39 crc kubenswrapper[5121]: I0126 00:12:39.743344 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.060120 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.060868 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03005723-7899-48d1-96f0-dec2c9563ccb" containerName="pruner" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.060885 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="03005723-7899-48d1-96f0-dec2c9563ccb" containerName="pruner" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.061208 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="03005723-7899-48d1-96f0-dec2c9563ccb" containerName="pruner" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.331053 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.339317 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.339592 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.340075 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.458363 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52a8e246-af61-41fb-9732-b7e4e2777d4e-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.458789 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52a8e246-af61-41fb-9732-b7e4e2777d4e-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.626507 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52a8e246-af61-41fb-9732-b7e4e2777d4e-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.626612 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52a8e246-af61-41fb-9732-b7e4e2777d4e-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.626744 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52a8e246-af61-41fb-9732-b7e4e2777d4e-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") " 
pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.662384 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52a8e246-af61-41fb-9732-b7e4e2777d4e-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:42 crc kubenswrapper[5121]: I0126 00:12:42.947496 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 26 00:12:43 crc kubenswrapper[5121]: I0126 00:12:43.023239 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rqsvg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:12:43 crc kubenswrapper[5121]: I0126 00:12:43.023316 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:12:45 crc kubenswrapper[5121]: E0126 00:12:45.664780 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:45 crc kubenswrapper[5121]: E0126 00:12:45.665457 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:45 crc kubenswrapper[5121]: E0126 00:12:45.666155 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 26 00:12:45 crc kubenswrapper[5121]: E0126 00:12:45.666247 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 26 00:12:45 crc kubenswrapper[5121]: I0126 00:12:45.695662 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-tgcgk 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:12:45 crc kubenswrapper[5121]: I0126 00:12:45.695775 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.467505 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.475195 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.519219 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.537238 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.537326 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-var-lock\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.537430 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f4c25e-df23-4d49-843a-918cbb36df1c-kube-api-access\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.639248 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f4c25e-df23-4d49-843a-918cbb36df1c-kube-api-access\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.639832 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.641668 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-var-lock\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.641833 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-var-lock\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.640099 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.665446 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f4c25e-df23-4d49-843a-918cbb36df1c-kube-api-access\") pod \"installer-12-crc\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:47 crc kubenswrapper[5121]: I0126 00:12:47.792127 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:12:49 crc kubenswrapper[5121]: I0126 00:12:49.743995 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:49 crc kubenswrapper[5121]: I0126 00:12:49.744602 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.271057 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.278698 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.305810 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.307276 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.307294 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.307311 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.307316 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.307437 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="78781662-c6e5-43f1-8914-a11c064230ca" containerName="controller-manager" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.307452 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.317848 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.328422 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.340500 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac9c212-b298-468b-a465-d924254ae8ab-serving-cert\") pod \"eac9c212-b298-468b-a465-d924254ae8ab\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.340652 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac9c212-b298-468b-a465-d924254ae8ab-tmp\") pod \"eac9c212-b298-468b-a465-d924254ae8ab\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.340685 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-config\") pod \"eac9c212-b298-468b-a465-d924254ae8ab\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.340817 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78781662-c6e5-43f1-8914-a11c064230ca-tmp\") pod \"78781662-c6e5-43f1-8914-a11c064230ca\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.340966 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcdh9\" (UniqueName: 
\"kubernetes.io/projected/eac9c212-b298-468b-a465-d924254ae8ab-kube-api-access-lcdh9\") pod \"eac9c212-b298-468b-a465-d924254ae8ab\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.341135 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-proxy-ca-bundles\") pod \"78781662-c6e5-43f1-8914-a11c064230ca\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.341173 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78781662-c6e5-43f1-8914-a11c064230ca-serving-cert\") pod \"78781662-c6e5-43f1-8914-a11c064230ca\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.341497 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkr8g\" (UniqueName: \"kubernetes.io/projected/78781662-c6e5-43f1-8914-a11c064230ca-kube-api-access-vkr8g\") pod \"78781662-c6e5-43f1-8914-a11c064230ca\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.341658 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-config\") pod \"78781662-c6e5-43f1-8914-a11c064230ca\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.341961 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-client-ca\") pod \"78781662-c6e5-43f1-8914-a11c064230ca\" (UID: \"78781662-c6e5-43f1-8914-a11c064230ca\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.342006 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-config" (OuterVolumeSpecName: "config") pod "eac9c212-b298-468b-a465-d924254ae8ab" (UID: "eac9c212-b298-468b-a465-d924254ae8ab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.342251 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-client-ca\") pod \"eac9c212-b298-468b-a465-d924254ae8ab\" (UID: \"eac9c212-b298-468b-a465-d924254ae8ab\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.342707 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-config\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.342967 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-client-ca\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.343279 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aeaa242-0f5c-4494-b383-0d78f9d74243-serving-cert\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.343312 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phrsr\" (UniqueName: \"kubernetes.io/projected/7aeaa242-0f5c-4494-b383-0d78f9d74243-kube-api-access-phrsr\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.343419 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7aeaa242-0f5c-4494-b383-0d78f9d74243-tmp\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.344118 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.345824 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "78781662-c6e5-43f1-8914-a11c064230ca" (UID: "78781662-c6e5-43f1-8914-a11c064230ca"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.347106 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78781662-c6e5-43f1-8914-a11c064230ca-tmp" (OuterVolumeSpecName: "tmp") pod "78781662-c6e5-43f1-8914-a11c064230ca" (UID: "78781662-c6e5-43f1-8914-a11c064230ca"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.349871 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-client-ca" (OuterVolumeSpecName: "client-ca") pod "eac9c212-b298-468b-a465-d924254ae8ab" (UID: "eac9c212-b298-468b-a465-d924254ae8ab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.350683 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac9c212-b298-468b-a465-d924254ae8ab-tmp" (OuterVolumeSpecName: "tmp") pod "eac9c212-b298-468b-a465-d924254ae8ab" (UID: "eac9c212-b298-468b-a465-d924254ae8ab"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.350579 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "78781662-c6e5-43f1-8914-a11c064230ca" (UID: "78781662-c6e5-43f1-8914-a11c064230ca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.351360 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-config" (OuterVolumeSpecName: "config") pod "78781662-c6e5-43f1-8914-a11c064230ca" (UID: "78781662-c6e5-43f1-8914-a11c064230ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.351675 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac9c212-b298-468b-a465-d924254ae8ab-kube-api-access-lcdh9" (OuterVolumeSpecName: "kube-api-access-lcdh9") pod "eac9c212-b298-468b-a465-d924254ae8ab" (UID: "eac9c212-b298-468b-a465-d924254ae8ab"). InnerVolumeSpecName "kube-api-access-lcdh9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.352320 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-759d785f59-zxh49"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.355932 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac9c212-b298-468b-a465-d924254ae8ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eac9c212-b298-468b-a465-d924254ae8ab" (UID: "eac9c212-b298-468b-a465-d924254ae8ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.367006 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-759d785f59-zxh49"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.367164 5121 util.go:30] "No sandbox for pod can be found. 
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.372947 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78781662-c6e5-43f1-8914-a11c064230ca-kube-api-access-vkr8g" (OuterVolumeSpecName: "kube-api-access-vkr8g") pod "78781662-c6e5-43f1-8914-a11c064230ca" (UID: "78781662-c6e5-43f1-8914-a11c064230ca"). InnerVolumeSpecName "kube-api-access-vkr8g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.373918 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78781662-c6e5-43f1-8914-a11c064230ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78781662-c6e5-43f1-8914-a11c064230ca" (UID: "78781662-c6e5-43f1-8914-a11c064230ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446000 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-config\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aeaa242-0f5c-4494-b383-0d78f9d74243-serving-cert\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446133 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/11ca6370-efa7-43a5-ba4d-871d77330707-tmp\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446160 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phrsr\" (UniqueName: \"kubernetes.io/projected/7aeaa242-0f5c-4494-b383-0d78f9d74243-kube-api-access-phrsr\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446226 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7aeaa242-0f5c-4494-b383-0d78f9d74243-tmp\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446260 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqsxd\" (UniqueName: \"kubernetes.io/projected/11ca6370-efa7-43a5-ba4d-871d77330707-kube-api-access-hqsxd\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446329 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ca6370-efa7-43a5-ba4d-871d77330707-serving-cert\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446368 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-config\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446421 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-proxy-ca-bundles\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446458 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-client-ca\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446496 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-client-ca\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446571 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446589 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78781662-c6e5-43f1-8914-a11c064230ca-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446608 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vkr8g\" (UniqueName: \"kubernetes.io/projected/78781662-c6e5-43f1-8914-a11c064230ca-kube-api-access-vkr8g\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446627 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446639 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78781662-c6e5-43f1-8914-a11c064230ca-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446656 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac9c212-b298-468b-a465-d924254ae8ab-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446668 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac9c212-b298-468b-a465-d924254ae8ab-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446683 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac9c212-b298-468b-a465-d924254ae8ab-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446700 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78781662-c6e5-43f1-8914-a11c064230ca-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.446712 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcdh9\" (UniqueName: \"kubernetes.io/projected/eac9c212-b298-468b-a465-d924254ae8ab-kube-api-access-lcdh9\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.448801 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7aeaa242-0f5c-4494-b383-0d78f9d74243-tmp\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.455526 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-config\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.458448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aeaa242-0f5c-4494-b383-0d78f9d74243-serving-cert\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.458739 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-client-ca\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.471940 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phrsr\" (UniqueName: \"kubernetes.io/projected/7aeaa242-0f5c-4494-b383-0d78f9d74243-kube-api-access-phrsr\") pod \"route-controller-manager-5d466c5775-s9khz\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") " pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.479847 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dhklg_946bd7f5-92cd-435d-9ff8-72af506917be/kube-multus-additional-cni-plugins/0.log"
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dhklg_946bd7f5-92cd-435d-9ff8-72af506917be/kube-multus-additional-cni-plugins/0.log" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.479973 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.500134 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dhklg_946bd7f5-92cd-435d-9ff8-72af506917be/kube-multus-additional-cni-plugins/0.log" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.500718 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.501257 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dhklg" event={"ID":"946bd7f5-92cd-435d-9ff8-72af506917be","Type":"ContainerDied","Data":"72b3fd11557c617b301ee09bd29315ddbfd873bc6144cf3e6744267484a5af55"} Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.501415 5121 scope.go:117] "RemoveContainer" containerID="6650e17f2975847d99fe0e2c1b867e274e9b0fcd6d3ba33bda5a778a4c5b7cc1" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.513821 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.513830 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-tgcgk" event={"ID":"78781662-c6e5-43f1-8914-a11c064230ca","Type":"ContainerDied","Data":"bcb77203d108e201aca995c27f2fa076e1fc0aa8634bb7af52c83ff4c3755790"} Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.518372 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" event={"ID":"eac9c212-b298-468b-a465-d924254ae8ab","Type":"ContainerDied","Data":"1e403c59a1e2df36be4af5cfdf22f25a04dca6b1b904d7949058bafaf42eda04"} Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.518521 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.555984 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrrsq\" (UniqueName: \"kubernetes.io/projected/946bd7f5-92cd-435d-9ff8-72af506917be-kube-api-access-rrrsq\") pod \"946bd7f5-92cd-435d-9ff8-72af506917be\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.556666 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/946bd7f5-92cd-435d-9ff8-72af506917be-tuning-conf-dir\") pod \"946bd7f5-92cd-435d-9ff8-72af506917be\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.557297 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/946bd7f5-92cd-435d-9ff8-72af506917be-cni-sysctl-allowlist\") pod \"946bd7f5-92cd-435d-9ff8-72af506917be\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.557390 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/946bd7f5-92cd-435d-9ff8-72af506917be-ready\") pod \"946bd7f5-92cd-435d-9ff8-72af506917be\" (UID: \"946bd7f5-92cd-435d-9ff8-72af506917be\") " Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.559656 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/946bd7f5-92cd-435d-9ff8-72af506917be-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "946bd7f5-92cd-435d-9ff8-72af506917be" (UID: "946bd7f5-92cd-435d-9ff8-72af506917be"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.560690 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ca6370-efa7-43a5-ba4d-871d77330707-serving-cert\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.560852 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-proxy-ca-bundles\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.560932 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-client-ca\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.561008 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-config\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.561068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/11ca6370-efa7-43a5-ba4d-871d77330707-tmp\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.561169 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqsxd\" (UniqueName: \"kubernetes.io/projected/11ca6370-efa7-43a5-ba4d-871d77330707-kube-api-access-hqsxd\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.562469 5121 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/946bd7f5-92cd-435d-9ff8-72af506917be-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.563313 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946bd7f5-92cd-435d-9ff8-72af506917be-ready" (OuterVolumeSpecName: "ready") pod "946bd7f5-92cd-435d-9ff8-72af506917be" (UID: "946bd7f5-92cd-435d-9ff8-72af506917be"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.563413 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tgcgk"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.564799 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-client-ca\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.565605 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946bd7f5-92cd-435d-9ff8-72af506917be-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "946bd7f5-92cd-435d-9ff8-72af506917be" (UID: "946bd7f5-92cd-435d-9ff8-72af506917be"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.565805 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/11ca6370-efa7-43a5-ba4d-871d77330707-tmp\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.567645 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-proxy-ca-bundles\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.575974 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-config\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.578260 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946bd7f5-92cd-435d-9ff8-72af506917be-kube-api-access-rrrsq" (OuterVolumeSpecName: "kube-api-access-rrrsq") pod "946bd7f5-92cd-435d-9ff8-72af506917be" (UID: "946bd7f5-92cd-435d-9ff8-72af506917be"). InnerVolumeSpecName "kube-api-access-rrrsq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.586964 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqsxd\" (UniqueName: \"kubernetes.io/projected/11ca6370-efa7-43a5-ba4d-871d77330707-kube-api-access-hqsxd\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.587996 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-tgcgk"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.594127 5121 scope.go:117] "RemoveContainer" containerID="9bfe21660e4297076895acff14c1840cdb69d0f276a1b49d4cb27dd228e3d78c" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.599703 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.605825 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.606176 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ca6370-efa7-43a5-ba4d-871d77330707-serving-cert\") pod \"controller-manager-759d785f59-zxh49\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") " pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.655397 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.667284 5121 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/946bd7f5-92cd-435d-9ff8-72af506917be-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.668408 5121 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/946bd7f5-92cd-435d-9ff8-72af506917be-ready\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.669598 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrrsq\" (UniqueName: \"kubernetes.io/projected/946bd7f5-92cd-435d-9ff8-72af506917be-kube-api-access-rrrsq\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.676977 5121 scope.go:117] "RemoveContainer" containerID="126ec523caf9ee3a46284a8a1d1891b443ea45b0b94ccf25c0554edf1e68a240" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.716100 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.889741 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.897031 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhklg"] Jan 26 00:12:52 crc kubenswrapper[5121]: I0126 00:12:52.900052 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dhklg"] Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.024418 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rqsvg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.024481 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rqsvg" podUID="eac9c212-b298-468b-a465-d924254ae8ab" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.066418 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.151689 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"] Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.607611 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-759d785f59-zxh49"] Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.710746 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfhxk" event={"ID":"c51b5df5-ef7d-4d88-b10c-1321140728e8","Type":"ContainerStarted","Data":"a2bb7aa9fb375c1a28160fd94ddcd56bbb6dca83c9a3ea4f3648a5d7ecf90b93"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.715313 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjdc" event={"ID":"1a5d0fd1-d832-4686-905e-ccafef0fd5cd","Type":"ContainerStarted","Data":"bca443ca007878020ad10ee6523bc5b982f0b94e3f17bf9630034efb5c9887da"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.718502 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4p4cc" event={"ID":"395eb036-2c83-4393-b3a7-d6b872cf9e4b","Type":"ContainerStarted","Data":"8ea8c223a4290c8ed503e17f32140c615edd09215fbf77dc505f882c58fe44fd"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.721284 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" event={"ID":"377fc649-7ccb-4b5e-a98c-f217298fd396","Type":"ContainerStarted","Data":"58139bbe20f1b373058e8f021f501637c2f0d2265da0c468bea4685257742841"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.722266 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.725744 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerStarted","Data":"26e19b72560a70beaba0f39f007e8c877c27dc01f793102302ace9006110c35d"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.727696 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"52a8e246-af61-41fb-9732-b7e4e2777d4e","Type":"ContainerStarted","Data":"04b661ae4fc8269c70a5e857f14aa894db6855f4fe2d9c7a684a62c4716cbec8"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.728969 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d5f4c25e-df23-4d49-843a-918cbb36df1c","Type":"ContainerStarted","Data":"4b2dec64c65a113968bb35b4b6e71bd64a7060a1376d48488bdd250ba579ef13"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.730750 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-jxx48" event={"ID":"75e2dc1c-f659-4dc2-a18d-141f468e666a","Type":"ContainerStarted","Data":"62ffa08ee07006ff80042c11b1a509cbab2c996427669ba66004990bd900622e"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.731675 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.731744 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.731800 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.738143 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" event={"ID":"7aeaa242-0f5c-4494-b383-0d78f9d74243","Type":"ContainerStarted","Data":"e7ab71fbc984f9cc6a5a0c4aa856d3c4c2c7c2e17dd386feaa679fe69247cebb"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.740689 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8cdp" event={"ID":"77411de1-0221-4222-b0f1-33d1beba40ad","Type":"ContainerStarted","Data":"1291f248784dc600cceebf2a86f552b5cb5dafcc17fe34493f1bc49afa0f3e9b"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.743915 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrfn9" event={"ID":"1a9d6686-1ae2-48c4-91f2-a41a12de699f","Type":"ContainerStarted","Data":"28edef9c6f363addfa4e733c5ad8f047e7aef75f1c91570c5e55bdeb24251c19"} Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 00:12:53.746176 5121 generic.go:358] "Generic (PLEG): container finished" podID="3225226b-6f86-4163-b401-b9136c86dfed" containerID="54d2d87588d6bbe3dbfb327089ce9d4864fa7a3aba978ba6c260573368e5f588" exitCode=0 Jan 26 00:12:53 crc kubenswrapper[5121]: I0126 
00:12:53.746222 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rfv" event={"ID":"3225226b-6f86-4163-b401-b9136c86dfed","Type":"ContainerDied","Data":"54d2d87588d6bbe3dbfb327089ce9d4864fa7a3aba978ba6c260573368e5f588"} Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.366958 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78781662-c6e5-43f1-8914-a11c064230ca" path="/var/lib/kubelet/pods/78781662-c6e5-43f1-8914-a11c064230ca/volumes" Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.368337 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" podStartSLOduration=178.368320048 podStartE2EDuration="2m58.368320048s" podCreationTimestamp="2026-01-26 00:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:54.365289266 +0000 UTC m=+205.524490391" watchObservedRunningTime="2026-01-26 00:12:54.368320048 +0000 UTC m=+205.527521173" Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.368862 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" path="/var/lib/kubelet/pods/946bd7f5-92cd-435d-9ff8-72af506917be/volumes" Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.372190 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac9c212-b298-468b-a465-d924254ae8ab" path="/var/lib/kubelet/pods/eac9c212-b298-468b-a465-d924254ae8ab/volumes" Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.781476 5121 generic.go:358] "Generic (PLEG): container finished" podID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerID="a2bb7aa9fb375c1a28160fd94ddcd56bbb6dca83c9a3ea4f3648a5d7ecf90b93" exitCode=0 Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.781540 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfhxk" event={"ID":"c51b5df5-ef7d-4d88-b10c-1321140728e8","Type":"ContainerDied","Data":"a2bb7aa9fb375c1a28160fd94ddcd56bbb6dca83c9a3ea4f3648a5d7ecf90b93"} Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.792946 5121 generic.go:358] "Generic (PLEG): container finished" podID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerID="bca443ca007878020ad10ee6523bc5b982f0b94e3f17bf9630034efb5c9887da" exitCode=0 Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.793050 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjdc" event={"ID":"1a5d0fd1-d832-4686-905e-ccafef0fd5cd","Type":"ContainerDied","Data":"bca443ca007878020ad10ee6523bc5b982f0b94e3f17bf9630034efb5c9887da"} Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.801728 5121 generic.go:358] "Generic (PLEG): container finished" podID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerID="26e19b72560a70beaba0f39f007e8c877c27dc01f793102302ace9006110c35d" exitCode=0 Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.801874 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerDied","Data":"26e19b72560a70beaba0f39f007e8c877c27dc01f793102302ace9006110c35d"} Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.805669 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" 
event={"ID":"11ca6370-efa7-43a5-ba4d-871d77330707","Type":"ContainerStarted","Data":"cd8451aaa5b0b8eeda2735957d14bc3a1b0326f4400db3325e26007edab6cbc9"} Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.931906 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:54 crc kubenswrapper[5121]: I0126 00:12:54.932038 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:55 crc kubenswrapper[5121]: I0126 00:12:55.887977 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rfv" event={"ID":"3225226b-6f86-4163-b401-b9136c86dfed","Type":"ContainerStarted","Data":"c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910"} Jan 26 00:12:55 crc kubenswrapper[5121]: I0126 00:12:55.889327 5121 generic.go:358] "Generic (PLEG): container finished" podID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerID="8ea8c223a4290c8ed503e17f32140c615edd09215fbf77dc505f882c58fe44fd" exitCode=0 Jan 26 00:12:55 crc kubenswrapper[5121]: I0126 00:12:55.889405 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4p4cc" event={"ID":"395eb036-2c83-4393-b3a7-d6b872cf9e4b","Type":"ContainerDied","Data":"8ea8c223a4290c8ed503e17f32140c615edd09215fbf77dc505f882c58fe44fd"} Jan 26 00:12:55 crc kubenswrapper[5121]: I0126 00:12:55.892202 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gft" event={"ID":"de3aec27-d9d2-46ca-b04e-b2aa4358f339","Type":"ContainerStarted","Data":"2e9b043c8669e50b2e80ae8032a2649b97b637970f007e2475d070f85195ee9d"} Jan 26 00:12:55 crc kubenswrapper[5121]: I0126 00:12:55.893185 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:55 crc kubenswrapper[5121]: I0126 00:12:55.893224 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:55 crc kubenswrapper[5121]: I0126 00:12:55.945385 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m7rfv" podStartSLOduration=8.294793841 podStartE2EDuration="50.945363124s" podCreationTimestamp="2026-01-26 00:12:05 +0000 UTC" firstStartedPulling="2026-01-26 00:12:09.879778675 +0000 UTC m=+161.038979810" lastFinishedPulling="2026-01-26 00:12:52.530347978 +0000 UTC m=+203.689549093" observedRunningTime="2026-01-26 00:12:55.919410064 +0000 UTC m=+207.078611189" watchObservedRunningTime="2026-01-26 00:12:55.945363124 +0000 UTC m=+207.104564249" Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.942605 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-dfhxk" event={"ID":"c51b5df5-ef7d-4d88-b10c-1321140728e8","Type":"ContainerStarted","Data":"13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42"} Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.945362 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjdc" event={"ID":"1a5d0fd1-d832-4686-905e-ccafef0fd5cd","Type":"ContainerStarted","Data":"7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc"} Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.948367 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4p4cc" event={"ID":"395eb036-2c83-4393-b3a7-d6b872cf9e4b","Type":"ContainerStarted","Data":"6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18"} Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.952856 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerStarted","Data":"e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07"} Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.955547 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"52a8e246-af61-41fb-9732-b7e4e2777d4e","Type":"ContainerStarted","Data":"b76b05b49ce7b6b8cfa46b62d8487810847ff57f84c2be8121e65bdbbc5cec6a"} Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.957049 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" event={"ID":"11ca6370-efa7-43a5-ba4d-871d77330707","Type":"ContainerStarted","Data":"823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8"} Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.958275 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d5f4c25e-df23-4d49-843a-918cbb36df1c","Type":"ContainerStarted","Data":"bbb69b4c5900b333f0baea99394226a13f53019e8ec8710e9ad76016b28e2818"} Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.960682 5121 generic.go:358] "Generic (PLEG): container finished" podID="77411de1-0221-4222-b0f1-33d1beba40ad" containerID="1291f248784dc600cceebf2a86f552b5cb5dafcc17fe34493f1bc49afa0f3e9b" exitCode=0 Jan 26 00:12:56 crc kubenswrapper[5121]: I0126 00:12:56.961476 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8cdp" event={"ID":"77411de1-0221-4222-b0f1-33d1beba40ad","Type":"ContainerDied","Data":"1291f248784dc600cceebf2a86f552b5cb5dafcc17fe34493f1bc49afa0f3e9b"} Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.020060 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=10.020034887 podStartE2EDuration="10.020034887s" podCreationTimestamp="2026-01-26 00:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:56.983072223 +0000 UTC m=+208.142273368" watchObservedRunningTime="2026-01-26 00:12:57.020034887 +0000 UTC m=+208.179236012" Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.471419 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.471846 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.968827 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" event={"ID":"7aeaa242-0f5c-4494-b383-0d78f9d74243","Type":"ContainerStarted","Data":"8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee"} Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.971285 5121 patch_prober.go:28] interesting pod/route-controller-manager-5d466c5775-s9khz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.971342 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.971371 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.971417 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:57 crc kubenswrapper[5121]: I0126 00:12:57.983700 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.014485 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" podStartSLOduration=25.0144579 podStartE2EDuration="25.0144579s" podCreationTimestamp="2026-01-26 00:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:57.997527025 +0000 UTC m=+209.156728180" watchObservedRunningTime="2026-01-26 00:12:58.0144579 +0000 UTC m=+209.173659025" Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.032797 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dmjdc" podStartSLOduration=10.217376127 podStartE2EDuration="55.032778877s" podCreationTimestamp="2026-01-26 00:12:03 +0000 UTC" firstStartedPulling="2026-01-26 00:12:07.714538346 +0000 UTC m=+158.873739471" lastFinishedPulling="2026-01-26 00:12:52.529941096 +0000 UTC m=+203.689142221" observedRunningTime="2026-01-26 00:12:58.030403895 +0000 UTC m=+209.189605030" watchObservedRunningTime="2026-01-26 00:12:58.032778877 +0000 UTC m=+209.191979992" Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 
00:12:58.063420 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4p4cc" podStartSLOduration=11.289401284 podStartE2EDuration="56.063396268s" podCreationTimestamp="2026-01-26 00:12:02 +0000 UTC" firstStartedPulling="2026-01-26 00:12:07.756735136 +0000 UTC m=+158.915936261" lastFinishedPulling="2026-01-26 00:12:52.53073012 +0000 UTC m=+203.689931245" observedRunningTime="2026-01-26 00:12:58.060105138 +0000 UTC m=+209.219306253" watchObservedRunningTime="2026-01-26 00:12:58.063396268 +0000 UTC m=+209.222597393" Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.081520 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" podStartSLOduration=25.081497658 podStartE2EDuration="25.081497658s" podCreationTimestamp="2026-01-26 00:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:58.081178159 +0000 UTC m=+209.240379284" watchObservedRunningTime="2026-01-26 00:12:58.081497658 +0000 UTC m=+209.240698793" Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.115433 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dfhxk" podStartSLOduration=10.318453023 podStartE2EDuration="55.11541794s" podCreationTimestamp="2026-01-26 00:12:03 +0000 UTC" firstStartedPulling="2026-01-26 00:12:07.675153761 +0000 UTC m=+158.834354886" lastFinishedPulling="2026-01-26 00:12:52.472118678 +0000 UTC m=+203.631319803" observedRunningTime="2026-01-26 00:12:58.114355958 +0000 UTC m=+209.273557093" watchObservedRunningTime="2026-01-26 00:12:58.11541794 +0000 UTC m=+209.274619055" Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.129238 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=16.129223259 podStartE2EDuration="16.129223259s" podCreationTimestamp="2026-01-26 00:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:58.126737354 +0000 UTC m=+209.285938489" watchObservedRunningTime="2026-01-26 00:12:58.129223259 +0000 UTC m=+209.288424384" Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.977947 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8cdp" event={"ID":"77411de1-0221-4222-b0f1-33d1beba40ad","Type":"ContainerStarted","Data":"41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268"} Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.979587 5121 generic.go:358] "Generic (PLEG): container finished" podID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerID="28edef9c6f363addfa4e733c5ad8f047e7aef75f1c91570c5e55bdeb24251c19" exitCode=0 Jan 26 00:12:58 crc kubenswrapper[5121]: I0126 00:12:58.983551 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrfn9" event={"ID":"1a9d6686-1ae2-48c4-91f2-a41a12de699f","Type":"ContainerDied","Data":"28edef9c6f363addfa4e733c5ad8f047e7aef75f1c91570c5e55bdeb24251c19"} Jan 26 00:12:59 crc kubenswrapper[5121]: I0126 00:12:59.236722 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zdxmp" podStartSLOduration=11.602779558 podStartE2EDuration="54.23670427s" 
Jan 26 00:12:59 crc kubenswrapper[5121]: I0126 00:12:59.323271 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:13:00 crc kubenswrapper[5121]: I0126 00:13:00.016856 5121 generic.go:358] "Generic (PLEG): container finished" podID="52a8e246-af61-41fb-9732-b7e4e2777d4e" containerID="b76b05b49ce7b6b8cfa46b62d8487810847ff57f84c2be8121e65bdbbc5cec6a" exitCode=0
Jan 26 00:13:00 crc kubenswrapper[5121]: I0126 00:13:00.016946 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"52a8e246-af61-41fb-9732-b7e4e2777d4e","Type":"ContainerDied","Data":"b76b05b49ce7b6b8cfa46b62d8487810847ff57f84c2be8121e65bdbbc5cec6a"}
Jan 26 00:13:00 crc kubenswrapper[5121]: I0126 00:13:00.048860 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v8cdp" podStartSLOduration=12.272309137 podStartE2EDuration="57.04882206s" podCreationTimestamp="2026-01-26 00:12:03 +0000 UTC" firstStartedPulling="2026-01-26 00:12:07.828727262 +0000 UTC m=+158.987928387" lastFinishedPulling="2026-01-26 00:12:52.605240185 +0000 UTC m=+203.764441310" observedRunningTime="2026-01-26 00:13:00.044800707 +0000 UTC m=+211.204001842" watchObservedRunningTime="2026-01-26 00:13:00.04882206 +0000 UTC m=+211.208023185"
Jan 26 00:13:01 crc kubenswrapper[5121]: I0126 00:13:01.958500 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.032017 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"52a8e246-af61-41fb-9732-b7e4e2777d4e","Type":"ContainerDied","Data":"04b661ae4fc8269c70a5e857f14aa894db6855f4fe2d9c7a684a62c4716cbec8"}
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.032061 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b661ae4fc8269c70a5e857f14aa894db6855f4fe2d9c7a684a62c4716cbec8"
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.032154 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.033899 5121 generic.go:358] "Generic (PLEG): container finished" podID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerID="2e9b043c8669e50b2e80ae8032a2649b97b637970f007e2475d070f85195ee9d" exitCode=0
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.033972 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gft" event={"ID":"de3aec27-d9d2-46ca-b04e-b2aa4358f339","Type":"ContainerDied","Data":"2e9b043c8669e50b2e80ae8032a2649b97b637970f007e2475d070f85195ee9d"}
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.037575 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrfn9" event={"ID":"1a9d6686-1ae2-48c4-91f2-a41a12de699f","Type":"ContainerStarted","Data":"1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c"}
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.048904 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52a8e246-af61-41fb-9732-b7e4e2777d4e-kubelet-dir\") pod \"52a8e246-af61-41fb-9732-b7e4e2777d4e\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") "
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.049098 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52a8e246-af61-41fb-9732-b7e4e2777d4e-kube-api-access\") pod \"52a8e246-af61-41fb-9732-b7e4e2777d4e\" (UID: \"52a8e246-af61-41fb-9732-b7e4e2777d4e\") "
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.049099 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a8e246-af61-41fb-9732-b7e4e2777d4e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "52a8e246-af61-41fb-9732-b7e4e2777d4e" (UID: "52a8e246-af61-41fb-9732-b7e4e2777d4e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.049328 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52a8e246-af61-41fb-9732-b7e4e2777d4e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.166900 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a8e246-af61-41fb-9732-b7e4e2777d4e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "52a8e246-af61-41fb-9732-b7e4e2777d4e" (UID: "52a8e246-af61-41fb-9732-b7e4e2777d4e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:13:02 crc kubenswrapper[5121]: I0126 00:13:02.267512 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52a8e246-af61-41fb-9732-b7e4e2777d4e-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:03 crc kubenswrapper[5121]: I0126 00:13:03.713229 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hrfn9" podStartSLOduration=15.106638056 podStartE2EDuration="57.713207946s" podCreationTimestamp="2026-01-26 00:12:06 +0000 UTC" firstStartedPulling="2026-01-26 00:12:09.924524111 +0000 UTC m=+161.083725236" lastFinishedPulling="2026-01-26 00:12:52.531094001 +0000 UTC m=+203.690295126" observedRunningTime="2026-01-26 00:13:03.700440518 +0000 UTC m=+214.859641643" watchObservedRunningTime="2026-01-26 00:13:03.713207946 +0000 UTC m=+214.872409071"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.676250 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.677293 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.682990 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v8cdp"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.683638 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-v8cdp"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.742437 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dmjdc"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.742488 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dfhxk"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.743001 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-dmjdc"
Jan 26 00:13:04 crc kubenswrapper[5121]: I0126 00:13:04.743033 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-dfhxk"
Jan 26 00:13:05 crc kubenswrapper[5121]: I0126 00:13:05.678914 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:13:05 crc kubenswrapper[5121]: I0126 00:13:05.678967 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:13:05 crc kubenswrapper[5121]: I0126 00:13:05.894113 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 26 00:13:05 crc kubenswrapper[5121]: I0126 00:13:05.894185 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.000504 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-zdxmp"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.000610 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zdxmp"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.504786 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-6ztm9"]
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.790338 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v8cdp"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.791573 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.797050 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dmjdc"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.798322 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zdxmp"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.800245 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.805613 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dfhxk"
Jan 26 00:13:06 crc kubenswrapper[5121]: I0126 00:13:06.882082 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v8cdp"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.003061 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dmjdc"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.016710 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.028679 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zdxmp"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.046702 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dfhxk"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.047063 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.165042 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.165122 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hrfn9"
Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.475353 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp
10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:07 crc kubenswrapper[5121]: I0126 00:13:07.475444 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:08 crc kubenswrapper[5121]: I0126 00:13:08.237471 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hrfn9" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="registry-server" probeResult="failure" output=< Jan 26 00:13:08 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Jan 26 00:13:08 crc kubenswrapper[5121]: > Jan 26 00:13:08 crc kubenswrapper[5121]: I0126 00:13:08.525979 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v8cdp"] Jan 26 00:13:08 crc kubenswrapper[5121]: I0126 00:13:08.527069 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v8cdp" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="registry-server" containerID="cri-o://41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268" gracePeriod=2 Jan 26 00:13:08 crc kubenswrapper[5121]: I0126 00:13:08.724535 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdxmp"] Jan 26 00:13:08 crc kubenswrapper[5121]: I0126 00:13:08.725941 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zdxmp" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="registry-server" containerID="cri-o://e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07" gracePeriod=2 Jan 26 00:13:09 crc kubenswrapper[5121]: I0126 00:13:09.468058 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gft" event={"ID":"de3aec27-d9d2-46ca-b04e-b2aa4358f339","Type":"ContainerStarted","Data":"ee2da49ed26657a93e6cb05494528f85ac3f75954ec9aecf8453d70f7e76ae38"} Jan 26 00:13:11 crc kubenswrapper[5121]: I0126 00:13:11.124922 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dmjdc"] Jan 26 00:13:11 crc kubenswrapper[5121]: I0126 00:13:11.127812 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dmjdc" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="registry-server" containerID="cri-o://7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc" gracePeriod=2 Jan 26 00:13:11 crc kubenswrapper[5121]: I0126 00:13:11.512185 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-88gft" podStartSLOduration=23.922903469 podStartE2EDuration="1m5.512161324s" podCreationTimestamp="2026-01-26 00:12:06 +0000 UTC" firstStartedPulling="2026-01-26 00:12:10.941721022 +0000 UTC m=+162.100922147" lastFinishedPulling="2026-01-26 00:12:52.530978877 +0000 UTC m=+203.690180002" observedRunningTime="2026-01-26 00:13:11.509168353 +0000 UTC m=+222.668369478" watchObservedRunningTime="2026-01-26 00:13:11.512161324 +0000 UTC m=+222.671362449" Jan 26 00:13:15 crc kubenswrapper[5121]: I0126 00:13:15.893845 5121 patch_prober.go:28] interesting 
pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:15 crc kubenswrapper[5121]: I0126 00:13:15.894482 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:15 crc kubenswrapper[5121]: I0126 00:13:15.899582 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:13:15 crc kubenswrapper[5121]: I0126 00:13:15.971222 5121 ???:1] "http: TLS handshake error from 192.168.126.11:48582: no serving certificate available for the kubelet" Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.792288 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268 is running failed: container process not found" containerID="41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.792621 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268 is running failed: container process not found" containerID="41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.793105 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268 is running failed: container process not found" containerID="41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.793141 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-v8cdp" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="registry-server" probeResult="unknown" Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802132 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc is running failed: container process not found" containerID="7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802210 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07 is running 
failed: container process not found" containerID="e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802342 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc is running failed: container process not found" containerID="7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802420 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07 is running failed: container process not found" containerID="e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802504 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc is running failed: container process not found" containerID="7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802536 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-dmjdc" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="registry-server" probeResult="unknown" Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802744 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07 is running failed: container process not found" containerID="e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:16 crc kubenswrapper[5121]: E0126 00:13:16.802794 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-zdxmp" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="registry-server" probeResult="unknown" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.171897 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hrfn9" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.219937 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hrfn9" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.358633 5121 generic.go:358] "Generic (PLEG): container finished" podID="77411de1-0221-4222-b0f1-33d1beba40ad" containerID="41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268" exitCode=0 
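
Annotation: the ExecSync failures above are readiness probes racing container teardown. The kubelet is still execing the probe command into registry-server containers whose processes are already gone, so the CRI runtime answers NotFound and the probe result is recorded as "unknown". Below is a sketch of the probe shape implied by cmd=["grpc_health_probe","-addr=:50051"] and the earlier "within 1s" timeout output; it is inferred from the log, not copied from the actual CatalogSource pod spec.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Sketch of the exec probe implied by the log lines above: the kubelet runs
// grpc_health_probe inside the registry-server container via CRI ExecSync;
// once the container process is gone, ExecSync returns NotFound and the
// probe result becomes "unknown".
func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"grpc_health_probe", "-addr=:50051"},
			},
		},
		TimeoutSeconds: 1, // matches: timeout: failed to connect service ":50051" within 1s
	}
	fmt.Printf("%+v\n", probe)
}
```
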
Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.358709 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8cdp" event={"ID":"77411de1-0221-4222-b0f1-33d1beba40ad","Type":"ContainerDied","Data":"41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268"} Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.361208 5121 generic.go:358] "Generic (PLEG): container finished" podID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerID="e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07" exitCode=0 Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.361256 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerDied","Data":"e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07"} Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.470953 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.471017 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.471064 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.471526 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"62ffa08ee07006ff80042c11b1a509cbab2c996427669ba66004990bd900622e"} pod="openshift-console/downloads-747b44746d-jxx48" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.471580 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" containerID="cri-o://62ffa08ee07006ff80042c11b1a509cbab2c996427669ba66004990bd900622e" gracePeriod=2 Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.471731 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.471792 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.506231 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-88gft" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.506280 5121 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-88gft" Jan 26 00:13:17 crc kubenswrapper[5121]: I0126 00:13:17.548395 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-88gft" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.486728 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.603060 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-utilities\") pod \"42a04527-f4f6-4570-8b32-08c2e4515c41\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.603133 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-catalog-content\") pod \"42a04527-f4f6-4570-8b32-08c2e4515c41\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.603190 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kv6v\" (UniqueName: \"kubernetes.io/projected/42a04527-f4f6-4570-8b32-08c2e4515c41-kube-api-access-8kv6v\") pod \"42a04527-f4f6-4570-8b32-08c2e4515c41\" (UID: \"42a04527-f4f6-4570-8b32-08c2e4515c41\") " Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.604424 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-utilities" (OuterVolumeSpecName: "utilities") pod "42a04527-f4f6-4570-8b32-08c2e4515c41" (UID: "42a04527-f4f6-4570-8b32-08c2e4515c41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.609032 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a04527-f4f6-4570-8b32-08c2e4515c41-kube-api-access-8kv6v" (OuterVolumeSpecName: "kube-api-access-8kv6v") pod "42a04527-f4f6-4570-8b32-08c2e4515c41" (UID: "42a04527-f4f6-4570-8b32-08c2e4515c41"). InnerVolumeSpecName "kube-api-access-8kv6v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.623463 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42a04527-f4f6-4570-8b32-08c2e4515c41" (UID: "42a04527-f4f6-4570-8b32-08c2e4515c41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.704542 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.704578 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a04527-f4f6-4570-8b32-08c2e4515c41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.704590 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8kv6v\" (UniqueName: \"kubernetes.io/projected/42a04527-f4f6-4570-8b32-08c2e4515c41-kube-api-access-8kv6v\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.882133 5121 generic.go:358] "Generic (PLEG): container finished" podID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerID="7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc" exitCode=0 Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.882217 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjdc" event={"ID":"1a5d0fd1-d832-4686-905e-ccafef0fd5cd","Type":"ContainerDied","Data":"7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc"} Jan 26 00:13:18 crc kubenswrapper[5121]: I0126 00:13:18.921691 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-88gft" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.733046 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88gft"] Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.755522 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.822368 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-catalog-content\") pod \"77411de1-0221-4222-b0f1-33d1beba40ad\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.822443 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-utilities\") pod \"77411de1-0221-4222-b0f1-33d1beba40ad\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.822526 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rkv\" (UniqueName: \"kubernetes.io/projected/77411de1-0221-4222-b0f1-33d1beba40ad-kube-api-access-88rkv\") pod \"77411de1-0221-4222-b0f1-33d1beba40ad\" (UID: \"77411de1-0221-4222-b0f1-33d1beba40ad\") " Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.824185 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-utilities" (OuterVolumeSpecName: "utilities") pod "77411de1-0221-4222-b0f1-33d1beba40ad" (UID: "77411de1-0221-4222-b0f1-33d1beba40ad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.829482 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77411de1-0221-4222-b0f1-33d1beba40ad-kube-api-access-88rkv" (OuterVolumeSpecName: "kube-api-access-88rkv") pod "77411de1-0221-4222-b0f1-33d1beba40ad" (UID: "77411de1-0221-4222-b0f1-33d1beba40ad"). InnerVolumeSpecName "kube-api-access-88rkv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.891134 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v8cdp" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.891135 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8cdp" event={"ID":"77411de1-0221-4222-b0f1-33d1beba40ad","Type":"ContainerDied","Data":"1a9b8862bf8f59c9acac8ddbdfe77fc5415956c86e7e08c47963771791fe58e5"} Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.891323 5121 scope.go:117] "RemoveContainer" containerID="41ac6c7e680197b4dee68c42048c2e4fd234ee6679e3f45443e60dc0721b1268" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.895202 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjdc" event={"ID":"1a5d0fd1-d832-4686-905e-ccafef0fd5cd","Type":"ContainerDied","Data":"f60f9098808efcb7c2b7cd69f9c923e55be6d362ba8dea09c19cee7e68492623"} Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.895248 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f60f9098808efcb7c2b7cd69f9c923e55be6d362ba8dea09c19cee7e68492623" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.898268 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdxmp" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.898972 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdxmp" event={"ID":"42a04527-f4f6-4570-8b32-08c2e4515c41","Type":"ContainerDied","Data":"3e0e22474587b7ed5c428439b541258f31b776abf0c59ec7c5be6b32fd96deb3"} Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.909913 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.923680 5121 scope.go:117] "RemoveContainer" containerID="1291f248784dc600cceebf2a86f552b5cb5dafcc17fe34493f1bc49afa0f3e9b" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.927218 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.927252 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-88rkv\" (UniqueName: \"kubernetes.io/projected/77411de1-0221-4222-b0f1-33d1beba40ad-kube-api-access-88rkv\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.951727 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdxmp"] Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.955321 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdxmp"] Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.965329 5121 scope.go:117] "RemoveContainer" containerID="73f689c5302ac17378560ddf7812e8a4db6eb9f7b04c50485f05d026e126d15f" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.979311 5121 scope.go:117] "RemoveContainer" containerID="e97cf219ed28b4a2752898fd745bc7c3f945799143cb02c4dfe8b19bbe77ee07" Jan 26 00:13:19 crc kubenswrapper[5121]: I0126 00:13:19.991909 5121 scope.go:117] "RemoveContainer" containerID="26e19b72560a70beaba0f39f007e8c877c27dc01f793102302ace9006110c35d" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.005108 5121 scope.go:117] "RemoveContainer" containerID="47e6865a15380527a75033d83c095eb4efa7c45f7a90c65e01129998eab73e12" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.028431 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-catalog-content\") pod \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.028624 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-utilities\") pod \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.028743 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52mfs\" (UniqueName: \"kubernetes.io/projected/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-kube-api-access-52mfs\") pod \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\" (UID: \"1a5d0fd1-d832-4686-905e-ccafef0fd5cd\") " Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.029685 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-utilities" (OuterVolumeSpecName: "utilities") pod "1a5d0fd1-d832-4686-905e-ccafef0fd5cd" (UID: "1a5d0fd1-d832-4686-905e-ccafef0fd5cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.032663 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-kube-api-access-52mfs" (OuterVolumeSpecName: "kube-api-access-52mfs") pod "1a5d0fd1-d832-4686-905e-ccafef0fd5cd" (UID: "1a5d0fd1-d832-4686-905e-ccafef0fd5cd"). InnerVolumeSpecName "kube-api-access-52mfs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.057120 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a5d0fd1-d832-4686-905e-ccafef0fd5cd" (UID: "1a5d0fd1-d832-4686-905e-ccafef0fd5cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.127624 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77411de1-0221-4222-b0f1-33d1beba40ad" (UID: "77411de1-0221-4222-b0f1-33d1beba40ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.131307 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.131934 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77411de1-0221-4222-b0f1-33d1beba40ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.131953 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.132037 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-52mfs\" (UniqueName: \"kubernetes.io/projected/1a5d0fd1-d832-4686-905e-ccafef0fd5cd-kube-api-access-52mfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.227971 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v8cdp"] Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.230645 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v8cdp"] Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.262832 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" path="/var/lib/kubelet/pods/42a04527-f4f6-4570-8b32-08c2e4515c41/volumes" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.263425 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" path="/var/lib/kubelet/pods/77411de1-0221-4222-b0f1-33d1beba40ad/volumes" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.906433 5121 generic.go:358] "Generic (PLEG): container finished" podID="75e2dc1c-f659-4dc2-a18d-141f468e666a" 
containerID="62ffa08ee07006ff80042c11b1a509cbab2c996427669ba66004990bd900622e" exitCode=0 Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.906588 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-jxx48" event={"ID":"75e2dc1c-f659-4dc2-a18d-141f468e666a","Type":"ContainerDied","Data":"62ffa08ee07006ff80042c11b1a509cbab2c996427669ba66004990bd900622e"} Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.906624 5121 scope.go:117] "RemoveContainer" containerID="ae5e3aed8cf07bc3ecc9b103c7c135b4a03b71c6a016530e8807e7a153f33e67" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.908249 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-88gft" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="registry-server" containerID="cri-o://ee2da49ed26657a93e6cb05494528f85ac3f75954ec9aecf8453d70f7e76ae38" gracePeriod=2 Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.908492 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dmjdc" Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.927997 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dmjdc"] Jan 26 00:13:20 crc kubenswrapper[5121]: I0126 00:13:20.931579 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dmjdc"] Jan 26 00:13:22 crc kubenswrapper[5121]: I0126 00:13:22.264946 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" path="/var/lib/kubelet/pods/1a5d0fd1-d832-4686-905e-ccafef0fd5cd/volumes" Jan 26 00:13:24 crc kubenswrapper[5121]: I0126 00:13:24.935990 5121 generic.go:358] "Generic (PLEG): container finished" podID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerID="ee2da49ed26657a93e6cb05494528f85ac3f75954ec9aecf8453d70f7e76ae38" exitCode=0 Jan 26 00:13:24 crc kubenswrapper[5121]: I0126 00:13:24.936074 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gft" event={"ID":"de3aec27-d9d2-46ca-b04e-b2aa4358f339","Type":"ContainerDied","Data":"ee2da49ed26657a93e6cb05494528f85ac3f75954ec9aecf8453d70f7e76ae38"} Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.710744 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-88gft" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.840308 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-utilities\") pod \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.840362 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck9nk\" (UniqueName: \"kubernetes.io/projected/de3aec27-d9d2-46ca-b04e-b2aa4358f339-kube-api-access-ck9nk\") pod \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.840433 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-catalog-content\") pod \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\" (UID: \"de3aec27-d9d2-46ca-b04e-b2aa4358f339\") " Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.841485 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-utilities" (OuterVolumeSpecName: "utilities") pod "de3aec27-d9d2-46ca-b04e-b2aa4358f339" (UID: "de3aec27-d9d2-46ca-b04e-b2aa4358f339"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.859557 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3aec27-d9d2-46ca-b04e-b2aa4358f339-kube-api-access-ck9nk" (OuterVolumeSpecName: "kube-api-access-ck9nk") pod "de3aec27-d9d2-46ca-b04e-b2aa4358f339" (UID: "de3aec27-d9d2-46ca-b04e-b2aa4358f339"). InnerVolumeSpecName "kube-api-access-ck9nk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.941695 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.941730 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ck9nk\" (UniqueName: \"kubernetes.io/projected/de3aec27-d9d2-46ca-b04e-b2aa4358f339-kube-api-access-ck9nk\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.950556 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88gft" event={"ID":"de3aec27-d9d2-46ca-b04e-b2aa4358f339","Type":"ContainerDied","Data":"6388d81d091d68e756c451ee0e563243f0e9ada38d7b802d51e4981a69c65a6e"} Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.950612 5121 scope.go:117] "RemoveContainer" containerID="ee2da49ed26657a93e6cb05494528f85ac3f75954ec9aecf8453d70f7e76ae38" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.950800 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-88gft" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.967442 5121 scope.go:117] "RemoveContainer" containerID="2e9b043c8669e50b2e80ae8032a2649b97b637970f007e2475d070f85195ee9d" Jan 26 00:13:26 crc kubenswrapper[5121]: I0126 00:13:26.984120 5121 scope.go:117] "RemoveContainer" containerID="55fa921ee2d446f9c4eb888a8fe68467ec0c7a95f3028ca0a2e67910b974fe43" Jan 26 00:13:27 crc kubenswrapper[5121]: I0126 00:13:27.320577 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de3aec27-d9d2-46ca-b04e-b2aa4358f339" (UID: "de3aec27-d9d2-46ca-b04e-b2aa4358f339"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:13:27 crc kubenswrapper[5121]: I0126 00:13:27.347504 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3aec27-d9d2-46ca-b04e-b2aa4358f339-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:27 crc kubenswrapper[5121]: I0126 00:13:27.473288 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:27 crc kubenswrapper[5121]: I0126 00:13:27.473396 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:27 crc kubenswrapper[5121]: I0126 00:13:27.634718 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88gft"] Jan 26 00:13:27 crc kubenswrapper[5121]: I0126 00:13:27.641409 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-88gft"] Jan 26 00:13:28 crc kubenswrapper[5121]: I0126 00:13:28.265000 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" path="/var/lib/kubelet/pods/de3aec27-d9d2-46ca-b04e-b2aa4358f339/volumes" Jan 26 00:13:31 crc kubenswrapper[5121]: I0126 00:13:31.539042 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" containerID="cri-o://d221118f7e80730d9602701da654fc027f1f7b7f0224698f83da1c05b0f84ec2" gracePeriod=15 Jan 26 00:13:31 crc kubenswrapper[5121]: I0126 00:13:31.802010 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:13:31 crc kubenswrapper[5121]: I0126 00:13:31.802099 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 26 00:13:32 crc kubenswrapper[5121]: I0126 00:13:32.616309 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-759d785f59-zxh49"] Jan 26 00:13:32 crc kubenswrapper[5121]: I0126 00:13:32.616653 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" containerName="controller-manager" containerID="cri-o://823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8" gracePeriod=30 Jan 26 00:13:32 crc kubenswrapper[5121]: I0126 00:13:32.649101 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"] Jan 26 00:13:32 crc kubenswrapper[5121]: I0126 00:13:32.659966 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" containerName="route-controller-manager" containerID="cri-o://8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee" gracePeriod=30 Jan 26 00:13:33 crc kubenswrapper[5121]: I0126 00:13:33.265266 5121 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-6ztm9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Jan 26 00:13:33 crc kubenswrapper[5121]: I0126 00:13:33.265572 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.000259 5121 generic.go:358] "Generic (PLEG): container finished" podID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" containerID="a033e685a3035e7502669160a363774731135008c8bcb6ed59679dad5a6da2d9" exitCode=0 Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.000336 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-n6btg" event={"ID":"413e3cab-21d5-4c17-9ac8-4cfb8602343c","Type":"ContainerDied","Data":"a033e685a3035e7502669160a363774731135008c8bcb6ed59679dad5a6da2d9"} Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.275624 5121 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.276517 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" gracePeriod=15 Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.276572 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" gracePeriod=15 Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.276620 5121 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" gracePeriod=15 Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.276619 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" gracePeriod=15 Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.276623 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" gracePeriod=15 Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.276855 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277440 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277459 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277468 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277476 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277485 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="extract-content" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277491 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="extract-content" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277498 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="registry-server" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277504 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="registry-server" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277512 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277517 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277524 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="extract-content" Jan 26 00:13:34 crc 
kubenswrapper[5121]: I0126 00:13:34.277529 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="extract-content"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277540 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277548 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277555 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277560 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277574 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277579 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277586 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="extract-content"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277591 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="extract-content"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277605 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277611 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277625 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="extract-content"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277630 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="extract-content"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277636 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277651 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277656 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277661 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277671 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277678 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277687 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277692 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277701 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277746 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277792 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277802 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277813 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277819 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277825 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277832 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277840 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277846 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277856 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277861 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="extract-utilities"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277871 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52a8e246-af61-41fb-9732-b7e4e2777d4e" containerName="pruner"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277878 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a8e246-af61-41fb-9732-b7e4e2777d4e" containerName="pruner"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.277990 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278005 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278013 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278020 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278031 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278039 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a5d0fd1-d832-4686-905e-ccafef0fd5cd" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278048 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="946bd7f5-92cd-435d-9ff8-72af506917be" containerName="kube-multus-additional-cni-plugins"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278061 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="de3aec27-d9d2-46ca-b04e-b2aa4358f339" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278071 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="52a8e246-af61-41fb-9732-b7e4e2777d4e" containerName="pruner"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278079 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278089 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278098 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="77411de1-0221-4222-b0f1-33d1beba40ad" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278105 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="42a04527-f4f6-4570-8b32-08c2e4515c41" containerName="registry-server"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278276 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278287 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278389 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:34 crc kubenswrapper[5121]: I0126 00:13:34.278400 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.374936 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.379785 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.380647 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.381483 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.381876 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:36 crc kubenswrapper[5121]: I0126 00:13:36.381982 5121 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.382249 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.583238 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.870668 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/events/downloads-747b44746d-jxx48.188e1f79af72a219\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{downloads-747b44746d-jxx48.188e1f79af72a219 openshift-console 38842 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:downloads-747b44746d-jxx48,UID:75e2dc1c-f659-4dc2-a18d-141f468e666a,APIVersion:v1,ResourceVersion:36798,FieldPath:spec.containers{download-server},},Reason:Created,Message:Created container: download-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:11:51 +0000 UTC,LastTimestamp:2026-01-26 00:13:36.869875925 +0000 UTC m=+248.029077050,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:13:36 crc kubenswrapper[5121]: E0126 00:13:36.984565 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms"
Jan 26 00:13:37 crc kubenswrapper[5121]: I0126 00:13:37.472890 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 26 00:13:37 crc kubenswrapper[5121]: I0126 00:13:37.472969 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 26 00:13:37 crc kubenswrapper[5121]: E0126 00:13:37.786210 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s"
Jan 26 00:13:37 crc kubenswrapper[5121]: I0126 00:13:37.970824 5121 patch_prober.go:28] interesting pod/controller-manager-759d785f59-zxh49 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Jan 26 00:13:37 crc kubenswrapper[5121]: I0126 00:13:37.971137 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 26 00:13:38 crc kubenswrapper[5121]: I0126 00:13:38.051882 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Jan 26 00:13:38 crc kubenswrapper[5121]: I0126 00:13:38.052446 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused"
Jan 26 00:13:38 crc kubenswrapper[5121]: I0126 00:13:38.982273 5121 patch_prober.go:28] interesting pod/route-controller-manager-5d466c5775-s9khz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Jan 26 00:13:38 crc kubenswrapper[5121]: I0126 00:13:38.982339 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused"
Jan 26 00:13:39 crc kubenswrapper[5121]: E0126 00:13:39.387222 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s"
Jan 26 00:13:40 crc kubenswrapper[5121]: I0126 00:13:40.262525 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:40 crc kubenswrapper[5121]: I0126 00:13:40.263193 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:41 crc kubenswrapper[5121]: E0126 00:13:41.511177 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/events/downloads-747b44746d-jxx48.188e1f79af72a219\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{downloads-747b44746d-jxx48.188e1f79af72a219 openshift-console 38842 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:downloads-747b44746d-jxx48,UID:75e2dc1c-f659-4dc2-a18d-141f468e666a,APIVersion:v1,ResourceVersion:36798,FieldPath:spec.containers{download-server},},Reason:Created,Message:Created container: download-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:11:51 +0000 UTC,LastTimestamp:2026-01-26 00:13:36.869875925 +0000 UTC m=+248.029077050,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:13:41 crc kubenswrapper[5121]: I0126 00:13:41.864468 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 00:13:41 crc kubenswrapper[5121]: I0126 00:13:41.864566 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 00:13:42 crc kubenswrapper[5121]: E0126 00:13:42.623276 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="6.4s"
Jan 26 00:13:43 crc kubenswrapper[5121]: I0126 00:13:43.052785 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Jan 26 00:13:43 crc kubenswrapper[5121]: I0126 00:13:43.053358 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused"
Jan 26 00:13:43 crc kubenswrapper[5121]: I0126 00:13:43.265868 5121 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-6ztm9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body=
Jan 26 00:13:43 crc kubenswrapper[5121]: I0126 00:13:43.266064 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused"
Jan 26 00:13:45 crc kubenswrapper[5121]: I0126 00:13:45.230000 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-66458b6674-6ztm9_387d3abf-783f-4184-81db-2fa8fa54ffc8/oauth-openshift/0.log"
Jan 26 00:13:45 crc kubenswrapper[5121]: I0126 00:13:45.230100 5121 generic.go:358] "Generic (PLEG): container finished" podID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerID="d221118f7e80730d9602701da654fc027f1f7b7f0224698f83da1c05b0f84ec2" exitCode=-1
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.256484 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" event={"ID":"387d3abf-783f-4184-81db-2fa8fa54ffc8","Type":"ContainerDied","Data":"d221118f7e80730d9602701da654fc027f1f7b7f0224698f83da1c05b0f84ec2"}
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.257462 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.532914 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.532969 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.533060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.533122 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.533172 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634001 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634236 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634282 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634321 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634387 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634415 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634449 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634382 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634929 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.634969 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.635416 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.636721 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.637507 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.638096 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.638543 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: E0126 00:13:46.693444 5121 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-conmon-14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-conmon-ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-conmon-d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-conmon-63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.734957 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.735336 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.735453 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.735572 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.735693 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.735107 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.735417 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.735539 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.736248 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.736468 5121 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.736509 5121 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.736521 5121 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.736530 5121 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.739651 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.748288 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.749718 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.750333 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" exitCode=0
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.750360 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" exitCode=0
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.750370 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" exitCode=0
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.750379 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" exitCode=2
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.750387 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" exitCode=0
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.752375 5121 generic.go:358] "Generic (PLEG): container finished" podID="11ca6370-efa7-43a5-ba4d-871d77330707" containerID="823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8" exitCode=0
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.756426 5121 generic.go:358] "Generic (PLEG): container finished" podID="7aeaa242-0f5c-4494-b383-0d78f9d74243" containerID="8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee" exitCode=0
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.801858 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-n6btg"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.803847 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.804310 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.837033 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47l46\" (UniqueName: \"kubernetes.io/projected/413e3cab-21d5-4c17-9ac8-4cfb8602343c-kube-api-access-47l46\") pod \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\" (UID: \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") "
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.837611 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/413e3cab-21d5-4c17-9ac8-4cfb8602343c-serviceca\") pod \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\" (UID: \"413e3cab-21d5-4c17-9ac8-4cfb8602343c\") "
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.837897 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.838234 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/413e3cab-21d5-4c17-9ac8-4cfb8602343c-serviceca" (OuterVolumeSpecName: "serviceca") pod "413e3cab-21d5-4c17-9ac8-4cfb8602343c" (UID: "413e3cab-21d5-4c17-9ac8-4cfb8602343c"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.847256 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413e3cab-21d5-4c17-9ac8-4cfb8602343c-kube-api-access-47l46" (OuterVolumeSpecName: "kube-api-access-47l46") pod "413e3cab-21d5-4c17-9ac8-4cfb8602343c" (UID: "413e3cab-21d5-4c17-9ac8-4cfb8602343c"). InnerVolumeSpecName "kube-api-access-47l46". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.940228 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47l46\" (UniqueName: \"kubernetes.io/projected/413e3cab-21d5-4c17-9ac8-4cfb8602343c-kube-api-access-47l46\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.940263 5121 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/413e3cab-21d5-4c17-9ac8-4cfb8602343c-serviceca\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.966126 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.966691 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.966932 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.967365 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.973266 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.973772 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.974063 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.974323 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.974597 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.979272 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.979772 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.980296 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.980501 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.980665 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:46 crc kubenswrapper[5121]: I0126 00:13:46.980930 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041709 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aeaa242-0f5c-4494-b383-0d78f9d74243-serving-cert\") pod \"7aeaa242-0f5c-4494-b383-0d78f9d74243\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041773 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-session\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041799 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-serving-cert\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041822 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-config\") pod \"11ca6370-efa7-43a5-ba4d-871d77330707\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041849 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-proxy-ca-bundles\") pod \"11ca6370-efa7-43a5-ba4d-871d77330707\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041866 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqsxd\" (UniqueName: \"kubernetes.io/projected/11ca6370-efa7-43a5-ba4d-871d77330707-kube-api-access-hqsxd\") pod \"11ca6370-efa7-43a5-ba4d-871d77330707\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041894 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-trusted-ca-bundle\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.041915 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-router-certs\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.043607 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-client-ca\") pod \"7aeaa242-0f5c-4494-b383-0d78f9d74243\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.043741 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-cliconfig\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.043796 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-provider-selection\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044073 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-client-ca\") pod \"11ca6370-efa7-43a5-ba4d-871d77330707\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044221 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/11ca6370-efa7-43a5-ba4d-871d77330707-tmp\") pod \"11ca6370-efa7-43a5-ba4d-871d77330707\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044331 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-error\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044377 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7aeaa242-0f5c-4494-b383-0d78f9d74243-tmp\") pod \"7aeaa242-0f5c-4494-b383-0d78f9d74243\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044469 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-config" (OuterVolumeSpecName: "config") pod "11ca6370-efa7-43a5-ba4d-871d77330707" (UID: "11ca6370-efa7-43a5-ba4d-871d77330707"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044494 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-login\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044502 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044545 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ca6370-efa7-43a5-ba4d-871d77330707-serving-cert\") pod \"11ca6370-efa7-43a5-ba4d-871d77330707\" (UID: \"11ca6370-efa7-43a5-ba4d-871d77330707\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044605 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjhnw\" (UniqueName: \"kubernetes.io/projected/387d3abf-783f-4184-81db-2fa8fa54ffc8-kube-api-access-cjhnw\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044670 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-config\") pod \"7aeaa242-0f5c-4494-b383-0d78f9d74243\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044697 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phrsr\" (UniqueName: \"kubernetes.io/projected/7aeaa242-0f5c-4494-b383-0d78f9d74243-kube-api-access-phrsr\") pod \"7aeaa242-0f5c-4494-b383-0d78f9d74243\" (UID: \"7aeaa242-0f5c-4494-b383-0d78f9d74243\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044699 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ca6370-efa7-43a5-ba4d-871d77330707-tmp" (OuterVolumeSpecName: "tmp") pod "11ca6370-efa7-43a5-ba4d-871d77330707" (UID: "11ca6370-efa7-43a5-ba4d-871d77330707"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044724 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-service-ca\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044774 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-policies\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044799 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-idp-0-file-data\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044843 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aeaa242-0f5c-4494-b383-0d78f9d74243-tmp" (OuterVolumeSpecName: "tmp") pod "7aeaa242-0f5c-4494-b383-0d78f9d74243" (UID: "7aeaa242-0f5c-4494-b383-0d78f9d74243"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044856 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-dir\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.044928 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-ocp-branding-template\") pod \"387d3abf-783f-4184-81db-2fa8fa54ffc8\" (UID: \"387d3abf-783f-4184-81db-2fa8fa54ffc8\") "
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.045401 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-client-ca" (OuterVolumeSpecName: "client-ca") pod "7aeaa242-0f5c-4494-b383-0d78f9d74243" (UID: "7aeaa242-0f5c-4494-b383-0d78f9d74243"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.045607 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.045625 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/11ca6370-efa7-43a5-ba4d-871d77330707-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.045638 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7aeaa242-0f5c-4494-b383-0d78f9d74243-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.045650 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.045661 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.046072 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "11ca6370-efa7-43a5-ba4d-871d77330707" (UID: "11ca6370-efa7-43a5-ba4d-871d77330707"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.046266 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.046310 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-client-ca" (OuterVolumeSpecName: "client-ca") pod "11ca6370-efa7-43a5-ba4d-871d77330707" (UID: "11ca6370-efa7-43a5-ba4d-871d77330707"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.046995 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.047090 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.047155 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-config" (OuterVolumeSpecName: "config") pod "7aeaa242-0f5c-4494-b383-0d78f9d74243" (UID: "7aeaa242-0f5c-4494-b383-0d78f9d74243"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.047163 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeaa242-0f5c-4494-b383-0d78f9d74243-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7aeaa242-0f5c-4494-b383-0d78f9d74243" (UID: "7aeaa242-0f5c-4494-b383-0d78f9d74243"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.047189 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.047238 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.047463 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.048290 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.048476 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ca6370-efa7-43a5-ba4d-871d77330707-kube-api-access-hqsxd" (OuterVolumeSpecName: "kube-api-access-hqsxd") pod "11ca6370-efa7-43a5-ba4d-871d77330707" (UID: "11ca6370-efa7-43a5-ba4d-871d77330707"). InnerVolumeSpecName "kube-api-access-hqsxd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.048753 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.049474 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.050000 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387d3abf-783f-4184-81db-2fa8fa54ffc8-kube-api-access-cjhnw" (OuterVolumeSpecName: "kube-api-access-cjhnw") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "kube-api-access-cjhnw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.050431 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.050568 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ca6370-efa7-43a5-ba4d-871d77330707-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "11ca6370-efa7-43a5-ba4d-871d77330707" (UID: "11ca6370-efa7-43a5-ba4d-871d77330707"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.050788 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aeaa242-0f5c-4494-b383-0d78f9d74243-kube-api-access-phrsr" (OuterVolumeSpecName: "kube-api-access-phrsr") pod "7aeaa242-0f5c-4494-b383-0d78f9d74243" (UID: "7aeaa242-0f5c-4494-b383-0d78f9d74243"). InnerVolumeSpecName "kube-api-access-phrsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.050917 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.051160 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "387d3abf-783f-4184-81db-2fa8fa54ffc8" (UID: "387d3abf-783f-4184-81db-2fa8fa54ffc8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.146966 5121 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.147366 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.147501 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aeaa242-0f5c-4494-b383-0d78f9d74243-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.147585 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.147666 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.147741 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.147882 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hqsxd\" (UniqueName: \"kubernetes.io/projected/11ca6370-efa7-43a5-ba4d-871d77330707-kube-api-access-hqsxd\") on node \"crc\" DevicePath \"\""
Jan 26 00:13:47 crc kubenswrapper[5121]: I0126
00:13:47.147970 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148056 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148136 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148213 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11ca6370-efa7-43a5-ba4d-871d77330707-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148303 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148384 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148463 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ca6370-efa7-43a5-ba4d-871d77330707-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148537 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cjhnw\" (UniqueName: \"kubernetes.io/projected/387d3abf-783f-4184-81db-2fa8fa54ffc8-kube-api-access-cjhnw\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148616 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aeaa242-0f5c-4494-b383-0d78f9d74243-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148688 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phrsr\" (UniqueName: \"kubernetes.io/projected/7aeaa242-0f5c-4494-b383-0d78f9d74243-kube-api-access-phrsr\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148751 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148866 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/387d3abf-783f-4184-81db-2fa8fa54ffc8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.148931 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/387d3abf-783f-4184-81db-2fa8fa54ffc8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:47 crc kubenswrapper[5121]: E0126 00:13:47.269639 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:13:47Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:13:47Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:13:47Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:13:47Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:47 crc kubenswrapper[5121]: E0126 00:13:47.271469 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:47 crc kubenswrapper[5121]: E0126 00:13:47.272038 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:47 crc kubenswrapper[5121]: E0126 00:13:47.272563 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:47 crc kubenswrapper[5121]: E0126 00:13:47.273393 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:47 crc kubenswrapper[5121]: E0126 00:13:47.273424 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.472254 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.472670 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.765959 5121 generic.go:358] 
"Generic (PLEG): container finished" podID="d5f4c25e-df23-4d49-843a-918cbb36df1c" containerID="bbb69b4c5900b333f0baea99394226a13f53019e8ec8710e9ad76016b28e2818" exitCode=0 Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.772965 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 26 00:13:47 crc kubenswrapper[5121]: I0126 00:13:47.774659 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465024 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465487 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465543 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-n6btg" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465799 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465801 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465845 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465893 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.465967 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.466111 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.466362 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.467061 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.467317 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.470315 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.471081 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.471470 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.471998 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.472436 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.472672 5121 status_manager.go:895] "Failed to get status 
for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.472996 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.479262 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483425 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" event={"ID":"11ca6370-efa7-43a5-ba4d-871d77330707","Type":"ContainerDied","Data":"823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483496 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-jxx48" event={"ID":"75e2dc1c-f659-4dc2-a18d-141f468e666a","Type":"ContainerStarted","Data":"548a28a2ea997538804b9ce9f750628bb870818fab8844595ae63cc86d1b7b7d"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483517 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" event={"ID":"7aeaa242-0f5c-4494-b383-0d78f9d74243","Type":"ContainerDied","Data":"8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483534 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" event={"ID":"11ca6370-efa7-43a5-ba4d-871d77330707","Type":"ContainerDied","Data":"cd8451aaa5b0b8eeda2735957d14bc3a1b0326f4400db3325e26007edab6cbc9"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483548 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d5f4c25e-df23-4d49-843a-918cbb36df1c","Type":"ContainerDied","Data":"bbb69b4c5900b333f0baea99394226a13f53019e8ec8710e9ad76016b28e2818"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483573 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" event={"ID":"7aeaa242-0f5c-4494-b383-0d78f9d74243","Type":"ContainerDied","Data":"e7ab71fbc984f9cc6a5a0c4aa856d3c4c2c7c2e17dd386feaa679fe69247cebb"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483589 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" event={"ID":"387d3abf-783f-4184-81db-2fa8fa54ffc8","Type":"ContainerDied","Data":"c4d4a744af559ff847df5e8610e7dadd3c81c46c4370be0f7fcf526e6800c541"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483606 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-n6btg" 
event={"ID":"413e3cab-21d5-4c17-9ac8-4cfb8602343c","Type":"ContainerDied","Data":"6d5775464e980aba9fa20459608be06b62c894f6d9cf800017c88a4533b62754"} Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.483624 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5775464e980aba9fa20459608be06b62c894f6d9cf800017c88a4533b62754" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.484213 5121 scope.go:117] "RemoveContainer" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.484915 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.485471 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.485746 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.486585 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.488041 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.488561 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.502284 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.503131 5121 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.509322 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.537861 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.538093 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.538427 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.538771 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.538975 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.539181 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.539479 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.539645 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection 
refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.539827 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.539979 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.540128 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.540272 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.552732 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.552992 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.553141 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.553330 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.553560 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.553744 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.553960 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.554225 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.554434 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.554577 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.554738 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.554972 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.555216 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.555387 5121 
status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.585914 5121 scope.go:117] "RemoveContainer" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.669612 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.669664 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.669708 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.669749 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.669790 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.683981 5121 scope.go:117] "RemoveContainer" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.700336 5121 scope.go:117] "RemoveContainer" containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.716252 5121 scope.go:117] "RemoveContainer" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.733730 5121 scope.go:117] "RemoveContainer" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771444 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771504 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771559 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771561 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771603 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771657 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771652 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771682 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771867 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.771913 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.782900 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.783382 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-jxx48 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.783461 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-jxx48" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.800545 5121 scope.go:117] "RemoveContainer" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.801279 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": container with ID starting with ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257 not found: ID does not exist" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.801323 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257"} err="failed to get container status \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": rpc error: code = NotFound desc = could not find container \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": container with ID starting with ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.801364 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.801812 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": container with ID starting with f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f not found: ID does not exist" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.801856 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"} err="failed to get container status \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": rpc error: code = NotFound desc = could not find container \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": container with ID starting with 
f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.801885 5121 scope.go:117] "RemoveContainer" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.802481 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": container with ID starting with d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3 not found: ID does not exist" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.802511 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3"} err="failed to get container status \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": rpc error: code = NotFound desc = could not find container \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": container with ID starting with d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.802524 5121 scope.go:117] "RemoveContainer" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.802781 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": container with ID starting with 14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166 not found: ID does not exist" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.802814 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166"} err="failed to get container status \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": rpc error: code = NotFound desc = could not find container \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": container with ID starting with 14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.802832 5121 scope.go:117] "RemoveContainer" containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.804183 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.804724 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": container with ID starting with f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed not found: ID does not exist" containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.804751 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed"} err="failed to get container status \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": rpc error: code = NotFound desc = could not find container \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": container with ID starting with f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.804797 5121 scope.go:117] "RemoveContainer" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.806882 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": container with ID starting with 63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479 not found: ID does not exist" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.806930 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479"} err="failed to get container status \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": rpc error: code = NotFound desc = could not find container \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": container with ID starting with 63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.806957 5121 scope.go:117] "RemoveContainer" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.807514 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": container with ID starting with bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f not found: ID does not exist" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.807540 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f"} err="failed to get container status \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": rpc error: code = NotFound desc = could not find container \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": container with ID starting with 
bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.807557 5121 scope.go:117] "RemoveContainer" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.807941 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257"} err="failed to get container status \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": rpc error: code = NotFound desc = could not find container \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": container with ID starting with ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.807969 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.808336 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"} err="failed to get container status \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": rpc error: code = NotFound desc = could not find container \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": container with ID starting with f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.808356 5121 scope.go:117] "RemoveContainer" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.808536 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3"} err="failed to get container status \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": rpc error: code = NotFound desc = could not find container \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": container with ID starting with d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.808553 5121 scope.go:117] "RemoveContainer" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.808782 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166"} err="failed to get container status \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": rpc error: code = NotFound desc = could not find container \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": container with ID starting with 14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.808812 5121 scope.go:117] "RemoveContainer" containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.809074 5121 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed"} err="failed to get container status \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": rpc error: code = NotFound desc = could not find container \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": container with ID starting with f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.809092 5121 scope.go:117] "RemoveContainer" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.809540 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479"} err="failed to get container status \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": rpc error: code = NotFound desc = could not find container \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": container with ID starting with 63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.809564 5121 scope.go:117] "RemoveContainer" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.810066 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f"} err="failed to get container status \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": rpc error: code = NotFound desc = could not find container \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": container with ID starting with bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.810084 5121 scope.go:117] "RemoveContainer" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.810405 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257"} err="failed to get container status \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": rpc error: code = NotFound desc = could not find container \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": container with ID starting with ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.810429 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.810746 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"} err="failed to get container status \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": rpc error: code = NotFound desc = could not find container \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": container with ID starting with f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f not found: ID does not exist" Jan 
26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.810786 5121 scope.go:117] "RemoveContainer" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811027 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3"} err="failed to get container status \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": rpc error: code = NotFound desc = could not find container \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": container with ID starting with d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811045 5121 scope.go:117] "RemoveContainer" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811332 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166"} err="failed to get container status \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": rpc error: code = NotFound desc = could not find container \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": container with ID starting with 14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811354 5121 scope.go:117] "RemoveContainer" containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811539 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed"} err="failed to get container status \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": rpc error: code = NotFound desc = could not find container \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": container with ID starting with f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811563 5121 scope.go:117] "RemoveContainer" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811906 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479"} err="failed to get container status \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": rpc error: code = NotFound desc = could not find container \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": container with ID starting with 63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.811933 5121 scope.go:117] "RemoveContainer" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812193 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f"} err="failed to get container status 
\"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": rpc error: code = NotFound desc = could not find container \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": container with ID starting with bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812210 5121 scope.go:117] "RemoveContainer" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812487 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257"} err="failed to get container status \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": rpc error: code = NotFound desc = could not find container \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": container with ID starting with ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812507 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812709 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"} err="failed to get container status \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": rpc error: code = NotFound desc = could not find container \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": container with ID starting with f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812731 5121 scope.go:117] "RemoveContainer" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812929 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3"} err="failed to get container status \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": rpc error: code = NotFound desc = could not find container \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": container with ID starting with d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.812952 5121 scope.go:117] "RemoveContainer" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.813287 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166"} err="failed to get container status \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": rpc error: code = NotFound desc = could not find container \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": container with ID starting with 14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.813309 5121 scope.go:117] "RemoveContainer" 
containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.814966 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed"} err="failed to get container status \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": rpc error: code = NotFound desc = could not find container \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": container with ID starting with f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.815015 5121 scope.go:117] "RemoveContainer" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.815549 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479"} err="failed to get container status \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": rpc error: code = NotFound desc = could not find container \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": container with ID starting with 63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.815601 5121 scope.go:117] "RemoveContainer" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.816010 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f"} err="failed to get container status \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": rpc error: code = NotFound desc = could not find container \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": container with ID starting with bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.816032 5121 scope.go:117] "RemoveContainer" containerID="823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8" Jan 26 00:13:48 crc kubenswrapper[5121]: W0126 00:13:48.838596 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-3c00a32727ff15854fcdab41f7def4b84d5c566b0530035a4dc6e0b541475b1f WatchSource:0}: Error finding container 3c00a32727ff15854fcdab41f7def4b84d5c566b0530035a4dc6e0b541475b1f: Status 404 returned error can't find the container with id 3c00a32727ff15854fcdab41f7def4b84d5c566b0530035a4dc6e0b541475b1f Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.901561 5121 scope.go:117] "RemoveContainer" containerID="8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.918107 5121 scope.go:117] "RemoveContainer" containerID="823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.918779 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8\": 
container with ID starting with 823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8 not found: ID does not exist" containerID="823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.918846 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8"} err="failed to get container status \"823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8\": rpc error: code = NotFound desc = could not find container \"823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8\": container with ID starting with 823b8ede26ff49a7adc6de37b21a39f56fe582c43f3fcdf9af529fd39c4609b8 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.918885 5121 scope.go:117] "RemoveContainer" containerID="8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee" Jan 26 00:13:48 crc kubenswrapper[5121]: E0126 00:13:48.919778 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee\": container with ID starting with 8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee not found: ID does not exist" containerID="8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.919854 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee"} err="failed to get container status \"8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee\": rpc error: code = NotFound desc = could not find container \"8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee\": container with ID starting with 8b9d8b55d23bddbeac17cb5c601fe70719624deb338783a6a4cae5d9d3131cee not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.919890 5121 scope.go:117] "RemoveContainer" containerID="d221118f7e80730d9602701da654fc027f1f7b7f0224698f83da1c05b0f84ec2" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.945737 5121 scope.go:117] "RemoveContainer" containerID="ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.946279 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257"} err="failed to get container status \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": rpc error: code = NotFound desc = could not find container \"ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257\": container with ID starting with ffe934e0fd41033dc7170d8d6a0378f54df22814f0d3cc9dbf198a987956c257 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.946305 5121 scope.go:117] "RemoveContainer" containerID="f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.946679 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f"} err="failed to get container status \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": rpc error: code = NotFound desc = could not find 
container \"f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f\": container with ID starting with f46d0d08da4165d98365f696570f84e75a4a24dc1b0b5fbaed54a1834980261f not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.946727 5121 scope.go:117] "RemoveContainer" containerID="d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.947183 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3"} err="failed to get container status \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": rpc error: code = NotFound desc = could not find container \"d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3\": container with ID starting with d9b28c7930d087ac5f172ad00f7307b2b6af1ec8ffa86000c11a481cfab338d3 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.947239 5121 scope.go:117] "RemoveContainer" containerID="14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.947523 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166"} err="failed to get container status \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": rpc error: code = NotFound desc = could not find container \"14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166\": container with ID starting with 14d56b18e64dd0d7ade0ae02e36c3a9dbf561f141b09a6ed2b80e575bc0d0166 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.947542 5121 scope.go:117] "RemoveContainer" containerID="f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.947784 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed"} err="failed to get container status \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": rpc error: code = NotFound desc = could not find container \"f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed\": container with ID starting with f15b836f7f07cb5a40b136b7cf62cfffb43bf5ae5a62fe7b77f5de8c04ae51ed not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.947811 5121 scope.go:117] "RemoveContainer" containerID="63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.948130 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479"} err="failed to get container status \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": rpc error: code = NotFound desc = could not find container \"63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479\": container with ID starting with 63397c7c4e6ead5c1b9555620a72a75c30098e6a7f26146d139aa25f78ea3479 not found: ID does not exist" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.948152 5121 scope.go:117] "RemoveContainer" containerID="bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f" Jan 26 00:13:48 crc kubenswrapper[5121]: I0126 00:13:48.948403 5121 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f"} err="failed to get container status \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": rpc error: code = NotFound desc = could not find container \"bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f\": container with ID starting with bd02189d287f95680479073f998880ef9a988304119cf5941ff3049bbaabb47f not found: ID does not exist" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.016015 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.016905 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.017368 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.018008 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.018270 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.018548 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.018967 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.019481 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 
38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: E0126 00:13:49.025154 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="7s" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.076204 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-kubelet-dir\") pod \"d5f4c25e-df23-4d49-843a-918cbb36df1c\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.076399 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f4c25e-df23-4d49-843a-918cbb36df1c-kube-api-access\") pod \"d5f4c25e-df23-4d49-843a-918cbb36df1c\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.076393 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5f4c25e-df23-4d49-843a-918cbb36df1c" (UID: "d5f4c25e-df23-4d49-843a-918cbb36df1c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.076469 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-var-lock\") pod \"d5f4c25e-df23-4d49-843a-918cbb36df1c\" (UID: \"d5f4c25e-df23-4d49-843a-918cbb36df1c\") " Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.076520 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-var-lock" (OuterVolumeSpecName: "var-lock") pod "d5f4c25e-df23-4d49-843a-918cbb36df1c" (UID: "d5f4c25e-df23-4d49-843a-918cbb36df1c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.077044 5121 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.077071 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5f4c25e-df23-4d49-843a-918cbb36df1c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.080826 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f4c25e-df23-4d49-843a-918cbb36df1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5f4c25e-df23-4d49-843a-918cbb36df1c" (UID: "d5f4c25e-df23-4d49-843a-918cbb36df1c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.178696 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5f4c25e-df23-4d49-843a-918cbb36df1c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.261087 5121 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.261174 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.795032 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.795092 5121 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="5e6668a0c98be81d0ab3e7d49087ddd61adef168c6096384ab6abc679063ae21" exitCode=1 Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.795581 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"5e6668a0c98be81d0ab3e7d49087ddd61adef168c6096384ab6abc679063ae21"} Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.797903 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"3c00a32727ff15854fcdab41f7def4b84d5c566b0530035a4dc6e0b541475b1f"} Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.799565 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d5f4c25e-df23-4d49-843a-918cbb36df1c","Type":"ContainerDied","Data":"4b2dec64c65a113968bb35b4b6e71bd64a7060a1376d48488bdd250ba579ef13"} Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.799606 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2dec64c65a113968bb35b4b6e71bd64a7060a1376d48488bdd250ba579ef13" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.799585 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.812330 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.812646 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.813111 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.813410 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.813674 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.814039 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:49 crc kubenswrapper[5121]: I0126 00:13:49.814525 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5121]: I0126 00:13:50.261093 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5121]: I0126 00:13:50.261641 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" 
pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5121]: I0126 00:13:50.262292 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5121]: I0126 00:13:50.262780 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5121]: I0126 00:13:50.263163 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5121]: I0126 00:13:50.263694 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:50 crc kubenswrapper[5121]: I0126 00:13:50.264174 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:51 crc kubenswrapper[5121]: E0126 00:13:51.512425 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/events/downloads-747b44746d-jxx48.188e1f79af72a219\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{downloads-747b44746d-jxx48.188e1f79af72a219 openshift-console 38842 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:downloads-747b44746d-jxx48,UID:75e2dc1c-f659-4dc2-a18d-141f468e666a,APIVersion:v1,ResourceVersion:36798,FieldPath:spec.containers{download-server},},Reason:Created,Message:Created container: download-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:11:51 +0000 UTC,LastTimestamp:2026-01-26 00:13:36.869875925 +0000 UTC m=+248.029077050,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.074177 5121 scope.go:117] "RemoveContainer" 
containerID="5e6668a0c98be81d0ab3e7d49087ddd61adef168c6096384ab6abc679063ae21" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.074703 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.075858 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.076685 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.077030 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.077953 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.078560 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.078961 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.090169 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-jxx48" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.090821 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 
38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.091431 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.091817 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.092126 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.092426 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.092725 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[5121]: I0126 00:13:52.093078 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:53 crc kubenswrapper[5121]: I0126 00:13:53.473105 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:13:56 crc kubenswrapper[5121]: E0126 00:13:56.027297 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="7s" Jan 26 00:13:56 crc kubenswrapper[5121]: I0126 00:13:56.851370 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:13:56 crc kubenswrapper[5121]: I0126 00:13:56.851894 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"25d3b56bf2e1cfd458318d1f7d7053d2f421324cb4f27faa78199f78db9e4cf0"} Jan 26 00:13:56 crc kubenswrapper[5121]: I0126 00:13:56.853873 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41"} Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.859554 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:57 crc kubenswrapper[5121]: E0126 00:13:57.860283 5121 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.860292 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.860556 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.860816 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.861061 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.861319 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.861604 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc 
kubenswrapper[5121]: I0126 00:13:57.861876 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.862144 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.862353 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.862558 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.862841 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.863118 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.863377 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:13:57 crc kubenswrapper[5121]: I0126 00:13:57.863601 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.260303 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.260645 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.262782 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.263100 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.263446 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.264071 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.264451 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.264828 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.265197 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.265399 5121 status_manager.go:895] "Failed to get status for pod" 
podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.265642 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.265988 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.266336 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.267056 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.267394 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.283951 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.284001 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:00 crc kubenswrapper[5121]: E0126 00:14:00.284687 5121 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.285206 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:00 crc kubenswrapper[5121]: I0126 00:14:00.882973 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d2a0386b1d46449c5cf86cd14aad24bb83eda2816c7f7f54c53b84dc1f321f31"} Jan 26 00:14:01 crc kubenswrapper[5121]: E0126 00:14:01.514343 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/events/downloads-747b44746d-jxx48.188e1f79af72a219\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{downloads-747b44746d-jxx48.188e1f79af72a219 openshift-console 38842 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:downloads-747b44746d-jxx48,UID:75e2dc1c-f659-4dc2-a18d-141f468e666a,APIVersion:v1,ResourceVersion:36798,FieldPath:spec.containers{download-server},},Reason:Created,Message:Created container: download-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:11:51 +0000 UTC,LastTimestamp:2026-01-26 00:13:36.869875925 +0000 UTC m=+248.029077050,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:14:01 crc kubenswrapper[5121]: I0126 00:14:01.802670 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:14:01 crc kubenswrapper[5121]: I0126 00:14:01.802810 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.485866 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.490964 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.491941 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.493422 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.493724 5121 status_manager.go:895] "Failed to 
get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.494021 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.494373 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.494698 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:02 crc kubenswrapper[5121]: I0126 00:14:02.495042 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:03 crc kubenswrapper[5121]: E0126 00:14:03.029191 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="7s" Jan 26 00:14:03 crc kubenswrapper[5121]: I0126 00:14:03.358261 5121 patch_prober.go:28] interesting pod/package-server-manager-77f986bd66-hkvjl container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 00:14:03 crc kubenswrapper[5121]: I0126 00:14:03.358521 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-hkvjl" podUID="25b4983a-dbb4-499e-9b78-ef637f425116" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 00:14:03 crc kubenswrapper[5121]: I0126 00:14:03.473058 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:10 crc kubenswrapper[5121]: E0126 00:14:10.030686 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="7s" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.272012 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.272564 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.272991 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.273301 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.273680 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.274004 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.274317 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:10.274605 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: E0126 00:14:11.515406 
5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/events/downloads-747b44746d-jxx48.188e1f79af72a219\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{downloads-747b44746d-jxx48.188e1f79af72a219 openshift-console 38842 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:downloads-747b44746d-jxx48,UID:75e2dc1c-f659-4dc2-a18d-141f468e666a,APIVersion:v1,ResourceVersion:36798,FieldPath:spec.containers{download-server},},Reason:Created,Message:Created container: download-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:11:51 +0000 UTC,LastTimestamp:2026-01-26 00:13:36.869875925 +0000 UTC m=+248.029077050,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.647625 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.648474 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.649294 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.650200 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.650717 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.651354 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.651753 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.652160 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:12.652429 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: E0126 00:14:17.032025 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="7s" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.260701 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.261733 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.262492 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.262837 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.263145 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.263414 5121 status_manager.go:895] "Failed to get status 
for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.263727 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:20.263990 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:21.067292 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b2c6cf283c3cf64b4237950ca1d11546c811539501fc62d9477dd5ad4756586d"} Jan 26 00:14:24 crc kubenswrapper[5121]: E0126 00:14:21.516870 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/events/downloads-747b44746d-jxx48.188e1f79af72a219\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{downloads-747b44746d-jxx48.188e1f79af72a219 openshift-console 38842 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:downloads-747b44746d-jxx48,UID:75e2dc1c-f659-4dc2-a18d-141f468e666a,APIVersion:v1,ResourceVersion:36798,FieldPath:spec.containers{download-server},},Reason:Created,Message:Created container: download-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:11:51 +0000 UTC,LastTimestamp:2026-01-26 00:13:36.869875925 +0000 UTC m=+248.029077050,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.083623 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.084265 5121 generic.go:358] "Generic (PLEG): container finished" podID="fc4541ce-7789-4670-bc75-5c2868e52ce0" containerID="7f02aa8ab6740f6cdf5e0536f5661b5f7e67bd30343de05c40644acd4b1d091e" exitCode=1 Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.084470 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerDied","Data":"7f02aa8ab6740f6cdf5e0536f5661b5f7e67bd30343de05c40644acd4b1d091e"} Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.085290 5121 scope.go:117] "RemoveContainer" containerID="7f02aa8ab6740f6cdf5e0536f5661b5f7e67bd30343de05c40644acd4b1d091e" Jan 26 00:14:24 crc kubenswrapper[5121]: 
I0126 00:14:23.086532 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.086834 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.087289 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.088170 5121 status_manager.go:895] "Failed to get status for pod" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-dgvkt\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.088885 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.089468 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.089873 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.090356 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: I0126 00:14:23.090646 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:24 crc kubenswrapper[5121]: E0126 00:14:24.033483 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="7s" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.105985 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.107191 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"c5668259d7e26d3d094c7a1a0f5d0c08c9f6f2d492299a61d617093e40ef56f0"} Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.109066 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.109814 5121 status_manager.go:895] "Failed to get status for pod" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-dgvkt\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.110215 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.110820 5121 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="b2c6cf283c3cf64b4237950ca1d11546c811539501fc62d9477dd5ad4756586d" exitCode=0 Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.110900 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"b2c6cf283c3cf64b4237950ca1d11546c811539501fc62d9477dd5ad4756586d"} Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.111218 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.111623 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 
00:14:25.111749 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.111925 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.112175 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.112500 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: E0126 00:14:25.112632 5121 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.112929 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.113318 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.114303 5121 status_manager.go:895] "Failed to get status for pod" podUID="75e2dc1c-f659-4dc2-a18d-141f468e666a" pod="openshift-console/downloads-747b44746d-jxx48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-jxx48\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.114785 5121 status_manager.go:895] "Failed to get status for pod" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.115428 5121 status_manager.go:895] "Failed to get status for pod" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" 
pod="openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d466c5775-s9khz\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.117032 5121 status_manager.go:895] "Failed to get status for pod" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" pod="openshift-controller-manager/controller-manager-759d785f59-zxh49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-759d785f59-zxh49\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.118247 5121 status_manager.go:895] "Failed to get status for pod" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" pod="openshift-authentication/oauth-openshift-66458b6674-6ztm9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-6ztm9\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.118579 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.118815 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.119251 5121 status_manager.go:895] "Failed to get status for pod" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-dgvkt\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:25 crc kubenswrapper[5121]: I0126 00:14:25.119636 5121 status_manager.go:895] "Failed to get status for pod" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" pod="openshift-image-registry/image-pruner-29489760-n6btg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29489760-n6btg\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 26 00:14:26 crc kubenswrapper[5121]: I0126 00:14:26.120068 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"38b578da63873baf7f9bceffc61d360129995c806dee0ef734d522944a7f277f"} Jan 26 00:14:26 crc kubenswrapper[5121]: I0126 00:14:26.120585 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a4bb200481f5cd0e3c3bd896d43891a1629dd61ea46f20b866a85381ab39467a"} Jan 26 00:14:27 crc kubenswrapper[5121]: I0126 00:14:27.129882 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c03d0bc1e740e843ff15721735ba540a036cfa424c4f5c5f725d931a4d3c73a6"} Jan 26 00:14:27 crc kubenswrapper[5121]: I0126 00:14:27.129945 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ab4056b8991361b45221f0f70ded6280488c142680821327e08e1061a711df57"} Jan 26 00:14:28 crc kubenswrapper[5121]: I0126 00:14:28.155564 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a2b09891a03e63cd2fab55a6465ffe5b2a231abe6bb3e2e4b90cdadd2924550c"} Jan 26 00:14:28 crc kubenswrapper[5121]: I0126 00:14:28.156293 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:28 crc kubenswrapper[5121]: I0126 00:14:28.157008 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:28 crc kubenswrapper[5121]: I0126 00:14:28.157127 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:30 crc kubenswrapper[5121]: I0126 00:14:30.285398 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:30 crc kubenswrapper[5121]: I0126 00:14:30.285466 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:30 crc kubenswrapper[5121]: I0126 00:14:30.293842 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:30 crc kubenswrapper[5121]: I0126 00:14:30.591438 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:14:30 crc kubenswrapper[5121]: I0126 00:14:30.592350 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:14:30 crc kubenswrapper[5121]: I0126 00:14:30.595680 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:14:30 crc kubenswrapper[5121]: I0126 00:14:30.596273 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:14:31 crc kubenswrapper[5121]: I0126 00:14:31.803176 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:14:31 crc kubenswrapper[5121]: I0126 00:14:31.804022 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" 
podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:14:31 crc kubenswrapper[5121]: I0126 00:14:31.804158 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:14:31 crc kubenswrapper[5121]: I0126 00:14:31.805198 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"121715febe285b0cd53762d792b1e46046f0843af04ecfb809633b61a008898d"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:14:31 crc kubenswrapper[5121]: I0126 00:14:31.805274 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" containerID="cri-o://121715febe285b0cd53762d792b1e46046f0843af04ecfb809633b61a008898d" gracePeriod=600 Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.070618 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.176338 5121 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.176382 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.231599 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="121715febe285b0cd53762d792b1e46046f0843af04ecfb809633b61a008898d" exitCode=0 Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.231688 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"121715febe285b0cd53762d792b1e46046f0843af04ecfb809633b61a008898d"} Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.232746 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.232783 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.238642 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:14:33 crc kubenswrapper[5121]: I0126 00:14:33.241916 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="df368184-111c-4cae-910f-0b0ebb78dd60" Jan 26 00:14:34 crc kubenswrapper[5121]: I0126 00:14:34.243684 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" 
event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"f39a6654dbb57a32f87d3c6d5c0d5216f516cfa8d25596ac86a0268ff2b003c6"} Jan 26 00:14:34 crc kubenswrapper[5121]: I0126 00:14:34.244564 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:34 crc kubenswrapper[5121]: I0126 00:14:34.244592 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:14:34 crc kubenswrapper[5121]: I0126 00:14:34.248008 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="df368184-111c-4cae-910f-0b0ebb78dd60" Jan 26 00:14:36 crc kubenswrapper[5121]: I0126 00:14:36.264942 5121 generic.go:358] "Generic (PLEG): container finished" podID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerID="ced3b461a50436368935d9b9ef9c293d0eb80a3a47e55938bf8d8741f81d8d7c" exitCode=0 Jan 26 00:14:36 crc kubenswrapper[5121]: I0126 00:14:36.265033 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerDied","Data":"ced3b461a50436368935d9b9ef9c293d0eb80a3a47e55938bf8d8741f81d8d7c"} Jan 26 00:14:36 crc kubenswrapper[5121]: I0126 00:14:36.266296 5121 scope.go:117] "RemoveContainer" containerID="ced3b461a50436368935d9b9ef9c293d0eb80a3a47e55938bf8d8741f81d8d7c" Jan 26 00:14:37 crc kubenswrapper[5121]: I0126 00:14:37.277173 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerStarted","Data":"7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c"} Jan 26 00:14:37 crc kubenswrapper[5121]: I0126 00:14:37.278411 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:14:37 crc kubenswrapper[5121]: I0126 00:14:37.279154 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-926kg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 00:14:37 crc kubenswrapper[5121]: I0126 00:14:37.279241 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 00:14:38 crc kubenswrapper[5121]: I0126 00:14:38.286004 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/1.log" Jan 26 00:14:38 crc kubenswrapper[5121]: I0126 00:14:38.286452 5121 generic.go:358] "Generic (PLEG): container finished" podID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerID="7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c" exitCode=1 Jan 26 00:14:38 crc kubenswrapper[5121]: I0126 00:14:38.286560 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerDied","Data":"7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c"} Jan 26 00:14:38 crc kubenswrapper[5121]: I0126 00:14:38.286623 5121 scope.go:117] "RemoveContainer" containerID="ced3b461a50436368935d9b9ef9c293d0eb80a3a47e55938bf8d8741f81d8d7c" Jan 26 00:14:38 crc kubenswrapper[5121]: I0126 00:14:38.287077 5121 scope.go:117] "RemoveContainer" containerID="7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c" Jan 26 00:14:38 crc kubenswrapper[5121]: E0126 00:14:38.287642 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:14:39 crc kubenswrapper[5121]: I0126 00:14:39.390312 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/1.log" Jan 26 00:14:39 crc kubenswrapper[5121]: I0126 00:14:39.391566 5121 scope.go:117] "RemoveContainer" containerID="7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c" Jan 26 00:14:39 crc kubenswrapper[5121]: E0126 00:14:39.392035 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:14:44 crc kubenswrapper[5121]: I0126 00:14:44.087041 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:14:44 crc kubenswrapper[5121]: I0126 00:14:44.088457 5121 scope.go:117] "RemoveContainer" containerID="7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c" Jan 26 00:14:44 crc kubenswrapper[5121]: E0126 00:14:44.088837 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:14:54 crc kubenswrapper[5121]: I0126 00:14:54.256911 5121 scope.go:117] "RemoveContainer" containerID="7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c" Jan 26 00:14:54 crc kubenswrapper[5121]: I0126 00:14:54.563651 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/1.log" Jan 26 00:14:54 crc kubenswrapper[5121]: I0126 00:14:54.563849 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" 
event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerStarted","Data":"3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a"} Jan 26 00:14:54 crc kubenswrapper[5121]: I0126 00:14:54.564404 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:14:54 crc kubenswrapper[5121]: I0126 00:14:54.566307 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-926kg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 00:14:54 crc kubenswrapper[5121]: I0126 00:14:54.566463 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 00:14:55 crc kubenswrapper[5121]: I0126 00:14:55.570257 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/2.log" Jan 26 00:14:55 crc kubenswrapper[5121]: I0126 00:14:55.570958 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/1.log" Jan 26 00:14:55 crc kubenswrapper[5121]: I0126 00:14:55.570994 5121 generic.go:358] "Generic (PLEG): container finished" podID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerID="3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a" exitCode=1 Jan 26 00:14:55 crc kubenswrapper[5121]: I0126 00:14:55.571186 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerDied","Data":"3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a"} Jan 26 00:14:55 crc kubenswrapper[5121]: I0126 00:14:55.571228 5121 scope.go:117] "RemoveContainer" containerID="7814caaeb569f74a5d40d611d2b1afd33dff61454c8426166cd689136f11183c" Jan 26 00:14:55 crc kubenswrapper[5121]: I0126 00:14:55.571683 5121 scope.go:117] "RemoveContainer" containerID="3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a" Jan 26 00:14:55 crc kubenswrapper[5121]: E0126 00:14:55.572295 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:14:56 crc kubenswrapper[5121]: I0126 00:14:56.579676 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/2.log" Jan 26 00:14:56 crc kubenswrapper[5121]: I0126 00:14:56.580393 5121 scope.go:117] "RemoveContainer" containerID="3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a" Jan 26 00:14:56 crc kubenswrapper[5121]: E0126 00:14:56.580749 5121 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:15:03 crc kubenswrapper[5121]: I0126 00:15:03.329612 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 26 00:15:04 crc kubenswrapper[5121]: I0126 00:15:04.087196 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:15:04 crc kubenswrapper[5121]: I0126 00:15:04.087898 5121 scope.go:117] "RemoveContainer" containerID="3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a" Jan 26 00:15:04 crc kubenswrapper[5121]: E0126 00:15:04.088170 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:15:05 crc kubenswrapper[5121]: I0126 00:15:05.255355 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:15:09 crc kubenswrapper[5121]: I0126 00:15:09.937225 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:10 crc kubenswrapper[5121]: I0126 00:15:10.414045 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 26 00:15:13 crc kubenswrapper[5121]: I0126 00:15:13.684607 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 26 00:15:14 crc kubenswrapper[5121]: I0126 00:15:14.245516 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:14 crc kubenswrapper[5121]: I0126 00:15:14.731732 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 26 00:15:14 crc kubenswrapper[5121]: I0126 00:15:14.881029 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 26 00:15:14 crc kubenswrapper[5121]: I0126 00:15:14.900504 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.173234 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.256329 5121 scope.go:117] "RemoveContainer" containerID="3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a" Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.742355 
5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/3.log" Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.744018 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/2.log" Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.744090 5121 generic.go:358] "Generic (PLEG): container finished" podID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12" exitCode=1 Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.744155 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerDied","Data":"03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12"} Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.744217 5121 scope.go:117] "RemoveContainer" containerID="3adc7fd5911195b1ce47c90ef0d75825e4035ceaf3b1b703b36d7f6e2f0bdd4a" Jan 26 00:15:15 crc kubenswrapper[5121]: I0126 00:15:15.744845 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12" Jan 26 00:15:15 crc kubenswrapper[5121]: E0126 00:15:15.750646 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:15:16 crc kubenswrapper[5121]: I0126 00:15:16.132806 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 26 00:15:16 crc kubenswrapper[5121]: I0126 00:15:16.528363 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 26 00:15:16 crc kubenswrapper[5121]: I0126 00:15:16.752784 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/3.log" Jan 26 00:15:16 crc kubenswrapper[5121]: I0126 00:15:16.758992 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 26 00:15:17 crc kubenswrapper[5121]: I0126 00:15:17.108068 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 26 00:15:17 crc kubenswrapper[5121]: I0126 00:15:17.170252 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 26 00:15:17 crc kubenswrapper[5121]: I0126 00:15:17.578593 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 26 00:15:17 crc kubenswrapper[5121]: I0126 00:15:17.760934 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 26 00:15:18 crc 
kubenswrapper[5121]: I0126 00:15:18.009982 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 26 00:15:18 crc kubenswrapper[5121]: I0126 00:15:18.218973 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 26 00:15:18 crc kubenswrapper[5121]: I0126 00:15:18.284209 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:18 crc kubenswrapper[5121]: I0126 00:15:18.402865 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:18 crc kubenswrapper[5121]: I0126 00:15:18.704008 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 26 00:15:19 crc kubenswrapper[5121]: I0126 00:15:19.180080 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 26 00:15:19 crc kubenswrapper[5121]: I0126 00:15:19.271084 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 26 00:15:19 crc kubenswrapper[5121]: I0126 00:15:19.292776 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 26 00:15:21 crc kubenswrapper[5121]: I0126 00:15:21.160468 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 26 00:15:21 crc kubenswrapper[5121]: I0126 00:15:21.288655 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 26 00:15:21 crc kubenswrapper[5121]: I0126 00:15:21.540580 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:21 crc kubenswrapper[5121]: I0126 00:15:21.693311 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 26 00:15:21 crc kubenswrapper[5121]: I0126 00:15:21.849241 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 26 00:15:22 crc kubenswrapper[5121]: I0126 00:15:22.917562 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 26 00:15:24 crc kubenswrapper[5121]: I0126 00:15:24.087391 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:15:24 crc kubenswrapper[5121]: I0126 00:15:24.088326 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12" Jan 26 00:15:24 crc kubenswrapper[5121]: E0126 00:15:24.089220 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:15:24 crc kubenswrapper[5121]: I0126 00:15:24.379899 5121 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:15:24 crc kubenswrapper[5121]: I0126 00:15:24.447439 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:15:24 crc kubenswrapper[5121]: I0126 00:15:24.564460 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:15:25 crc kubenswrapper[5121]: I0126 00:15:25.109022 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:15:25 crc kubenswrapper[5121]: I0126 00:15:25.109953 5121 generic.go:358] "Generic (PLEG): container finished" podID="069690ff-331e-4ee8-bed5-24d79f939a40" containerID="a7fc03e9c26703c07aed73480a9915b133eaacb1520bc003aa5b9cf5dfbab35d" exitCode=255 Jan 26 00:15:25 crc kubenswrapper[5121]: I0126 00:15:25.110082 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" event={"ID":"069690ff-331e-4ee8-bed5-24d79f939a40","Type":"ContainerDied","Data":"a7fc03e9c26703c07aed73480a9915b133eaacb1520bc003aa5b9cf5dfbab35d"} Jan 26 00:15:25 crc kubenswrapper[5121]: I0126 00:15:25.111083 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12" Jan 26 00:15:25 crc kubenswrapper[5121]: E0126 00:15:25.111517 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:15:25 crc kubenswrapper[5121]: I0126 00:15:25.112700 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 26 00:15:25 crc kubenswrapper[5121]: I0126 00:15:25.115368 5121 scope.go:117] "RemoveContainer" containerID="a7fc03e9c26703c07aed73480a9915b133eaacb1520bc003aa5b9cf5dfbab35d" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.044611 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49508: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.069101 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49512: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.091553 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49526: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.120350 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49534: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.121066 5121 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.121817 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-9rgbz" event={"ID":"069690ff-331e-4ee8-bed5-24d79f939a40","Type":"ContainerStarted","Data":"db625731beab0684eabdc46868d870e020260b68ed2c378757c6d5d427d633b1"} Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.154162 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49548: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.238966 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.253546 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49558: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.439139 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49562: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.563879 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.622221 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.711819 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.785798 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49578: no serving certificate available for the kubelet" Jan 26 00:15:26 crc kubenswrapper[5121]: I0126 00:15:26.864988 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.235352 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.343457 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.459790 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49594: no serving certificate available for the kubelet" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.493063 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.495922 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.682694 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.827564 5121 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.849845 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.885872 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 26 00:15:27 crc kubenswrapper[5121]: I0126 00:15:27.929025 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:15:28 crc kubenswrapper[5121]: I0126 00:15:28.008831 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 26 00:15:28 crc kubenswrapper[5121]: I0126 00:15:28.060948 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 26 00:15:28 crc kubenswrapper[5121]: I0126 00:15:28.079629 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 26 00:15:28 crc kubenswrapper[5121]: I0126 00:15:28.421871 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 26 00:15:28 crc kubenswrapper[5121]: I0126 00:15:28.643264 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 26 00:15:28 crc kubenswrapper[5121]: I0126 00:15:28.770211 5121 ???:1] "http: TLS handshake error from 192.168.126.11:43192: no serving certificate available for the kubelet" Jan 26 00:15:29 crc kubenswrapper[5121]: I0126 00:15:29.288546 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 26 00:15:29 crc kubenswrapper[5121]: I0126 00:15:29.412157 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 26 00:15:29 crc kubenswrapper[5121]: I0126 00:15:29.554624 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:30 crc kubenswrapper[5121]: I0126 00:15:30.203402 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 26 00:15:30 crc kubenswrapper[5121]: I0126 00:15:30.427155 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 26 00:15:30 crc kubenswrapper[5121]: I0126 00:15:30.685477 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 26 00:15:30 crc kubenswrapper[5121]: I0126 00:15:30.744756 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 26 00:15:30 crc kubenswrapper[5121]: I0126 00:15:30.890739 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 26 
00:15:30 crc kubenswrapper[5121]: I0126 00:15:30.904385 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 26 00:15:30 crc kubenswrapper[5121]: I0126 00:15:30.964028 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 26 00:15:31 crc kubenswrapper[5121]: I0126 00:15:31.180719 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 26 00:15:31 crc kubenswrapper[5121]: I0126 00:15:31.252144 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 26 00:15:31 crc kubenswrapper[5121]: I0126 00:15:31.361129 5121 ???:1] "http: TLS handshake error from 192.168.126.11:43200: no serving certificate available for the kubelet" Jan 26 00:15:31 crc kubenswrapper[5121]: I0126 00:15:31.495988 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 26 00:15:31 crc kubenswrapper[5121]: I0126 00:15:31.690200 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 26 00:15:31 crc kubenswrapper[5121]: I0126 00:15:31.845844 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 26 00:15:32 crc kubenswrapper[5121]: I0126 00:15:32.104938 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:15:32 crc kubenswrapper[5121]: I0126 00:15:32.105503 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 26 00:15:32 crc kubenswrapper[5121]: I0126 00:15:32.461866 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 26 00:15:32 crc kubenswrapper[5121]: I0126 00:15:32.770902 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.063011 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.192743 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.313230 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.457299 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.672973 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.685470 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.689864 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.844872 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:33 crc kubenswrapper[5121]: I0126 00:15:33.875647 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 26 00:15:34 crc kubenswrapper[5121]: I0126 00:15:34.070461 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:34 crc kubenswrapper[5121]: I0126 00:15:34.121121 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 26 00:15:34 crc kubenswrapper[5121]: I0126 00:15:34.376031 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 26 00:15:34 crc kubenswrapper[5121]: I0126 00:15:34.642208 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:34 crc kubenswrapper[5121]: I0126 00:15:34.745953 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 26 00:15:34 crc kubenswrapper[5121]: I0126 00:15:34.839347 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 26 00:15:35 crc kubenswrapper[5121]: I0126 00:15:35.179579 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 26 00:15:35 crc kubenswrapper[5121]: I0126 00:15:35.817015 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.107830 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.158019 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.248996 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.256037 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12" Jan 26 00:15:36 crc kubenswrapper[5121]: E0126 00:15:36.256547 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.308096 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.443977 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.514468 5121 ???:1] "http: TLS handshake error from 192.168.126.11:43202: no serving certificate available for the kubelet" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.669659 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 26 00:15:36 crc kubenswrapper[5121]: I0126 00:15:36.899989 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.294038 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.312158 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.565756 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.628527 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.698805 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.766315 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.846220 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.877938 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.906472 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.934857 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 26 00:15:37 crc kubenswrapper[5121]: I0126 00:15:37.964752 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 26 00:15:38 crc kubenswrapper[5121]: I0126 00:15:38.415153 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" 
type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 26 00:15:38 crc kubenswrapper[5121]: I0126 00:15:38.463795 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 26 00:15:38 crc kubenswrapper[5121]: I0126 00:15:38.482017 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 26 00:15:38 crc kubenswrapper[5121]: I0126 00:15:38.627328 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 26 00:15:38 crc kubenswrapper[5121]: I0126 00:15:38.631179 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 26 00:15:38 crc kubenswrapper[5121]: I0126 00:15:38.878870 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 26 00:15:38 crc kubenswrapper[5121]: I0126 00:15:38.911461 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.176868 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.255856 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.255923 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.262069 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="df368184-111c-4cae-910f-0b0ebb78dd60" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.279933 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.550161 5121 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.805115 5121 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.812205 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-759d785f59-zxh49","openshift-authentication/oauth-openshift-66458b6674-6ztm9","openshift-route-controller-manager/route-controller-manager-5d466c5775-s9khz","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.812369 5121 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-59c8f4ddd-dv556","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-54759f584-d87tt","openshift-controller-manager/controller-manager-78698b59cb-hp7x4","openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8","openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813327 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813360 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813383 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" containerName="controller-manager" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813394 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" containerName="controller-manager" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813406 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" containerName="installer" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813414 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" containerName="installer" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813434 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" containerName="route-controller-manager" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813442 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" containerName="route-controller-manager" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813477 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" containerName="image-pruner" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813485 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" containerName="image-pruner" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813607 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="413e3cab-21d5-4c17-9ac8-4cfb8602343c" containerName="image-pruner" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813625 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5f4c25e-df23-4d49-843a-918cbb36df1c" containerName="installer" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813639 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" containerName="controller-manager" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813648 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" containerName="route-controller-manager" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.813660 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" containerName="oauth-openshift" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.818080 5121 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.822571 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59c8f4ddd-dv556" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.823555 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.823583 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.823614 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.823597 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.823612 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.825293 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.827134 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.827985 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.829835 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.830103 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.831237 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.831250 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78698b59cb-hp7x4" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.831495 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.832369 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.833617 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.955912 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-config-volume\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.956399 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-secret-volume\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.956466 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmdbr\" (UniqueName: \"kubernetes.io/projected/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-kube-api-access-dmdbr\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.956952 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.957335 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.957995 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.958473 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.961513 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.982659 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59c8f4ddd-dv556" Jan 26 00:15:39 crc kubenswrapper[5121]: I0126 00:15:39.992830 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78698b59cb-hp7x4" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.019041 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.019012754 podStartE2EDuration="35.019012754s" podCreationTimestamp="2026-01-26 00:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:40.017929254 +0000 UTC m=+371.177130389" watchObservedRunningTime="2026-01-26 00:15:40.019012754 +0000 UTC m=+371.178213899" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.058775 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.058856 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.058890 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.058917 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4m9w\" (UniqueName: \"kubernetes.io/projected/a93d46dd-864a-4ce6-880b-bebd385ebfd5-kube-api-access-q4m9w\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.058964 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-secret-volume\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059005 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmdbr\" (UniqueName: \"kubernetes.io/projected/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-kube-api-access-dmdbr\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059040 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059070 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059101 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059130 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-error\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059165 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059194 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059244 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-session\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059269 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-login\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 
26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059310 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-config-volume\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059376 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a93d46dd-864a-4ce6-880b-bebd385ebfd5-audit-dir\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.059407 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-audit-policies\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.061194 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-config-volume\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.070984 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-secret-volume\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.073938 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.081938 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmdbr\" (UniqueName: \"kubernetes.io/projected/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-kube-api-access-dmdbr\") pod \"collect-profiles-29489775-rvsh8\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.114536 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160670 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160742 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160779 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160805 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q4m9w\" (UniqueName: \"kubernetes.io/projected/a93d46dd-864a-4ce6-880b-bebd385ebfd5-kube-api-access-q4m9w\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160851 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160881 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160906 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160941 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-error\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.160980 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.161011 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.161072 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-session\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.161096 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-login\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.161139 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a93d46dd-864a-4ce6-880b-bebd385ebfd5-audit-dir\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.161235 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-audit-policies\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.161982 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a93d46dd-864a-4ce6-880b-bebd385ebfd5-audit-dir\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.162410 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-audit-policies\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.162426 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.162751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " 
pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.164188 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.166749 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.166786 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.166921 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-session\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.167470 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-error\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.168033 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.169157 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-template-login\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.169531 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " 
pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.169939 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a93d46dd-864a-4ce6-880b-bebd385ebfd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.181448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4m9w\" (UniqueName: \"kubernetes.io/projected/a93d46dd-864a-4ce6-880b-bebd385ebfd5-kube-api-access-q4m9w\") pod \"oauth-openshift-54759f584-d87tt\" (UID: \"a93d46dd-864a-4ce6-880b-bebd385ebfd5\") " pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.244386 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59c8f4ddd-dv556" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.244440 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78698b59cb-hp7x4" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.244741 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.244804 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="74cedbc5-175e-4ded-8571-2fe554c6d6d6" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.265358 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ca6370-efa7-43a5-ba4d-871d77330707" path="/var/lib/kubelet/pods/11ca6370-efa7-43a5-ba4d-871d77330707/volumes" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.267259 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387d3abf-783f-4184-81db-2fa8fa54ffc8" path="/var/lib/kubelet/pods/387d3abf-783f-4184-81db-2fa8fa54ffc8/volumes" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.268242 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aeaa242-0f5c-4494-b383-0d78f9d74243" path="/var/lib/kubelet/pods/7aeaa242-0f5c-4494-b383-0d78f9d74243/volumes" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.285631 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.310561 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.361962 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.361919147 podStartE2EDuration="1m7.361919147s" podCreationTimestamp="2026-01-26 00:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:40.359894981 +0000 UTC m=+371.519096116" watchObservedRunningTime="2026-01-26 00:15:40.361919147 +0000 UTC m=+371.521120272" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.406370 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.451389 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-26h95"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.486280 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78698b59cb-hp7x4"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.486470 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78698b59cb-hp7x4"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.489573 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.492673 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.493417 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.497928 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.498271 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.498293 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.498436 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.503490 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.504411 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-26h95"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.511005 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.515668 5121 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8f4ddd-dv556"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.515862 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.519832 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.520154 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.520183 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.520195 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.520498 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.522027 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.522805 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8f4ddd-dv556"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.529387 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh"] Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.562693 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.569613 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-proxy-ca-bundles\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.569650 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdj25\" (UniqueName: \"kubernetes.io/projected/01fab69a-42d0-4eb1-8807-73b22ee7a852-kube-api-access-hdj25\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.569691 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-client-ca\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc 
kubenswrapper[5121]: I0126 00:15:40.569716 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-client-ca\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.569863 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-config\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.569968 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01fab69a-42d0-4eb1-8807-73b22ee7a852-tmp\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.569998 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-config\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.570051 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fab69a-42d0-4eb1-8807-73b22ee7a852-serving-cert\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.570104 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0dd70109-dee7-4a4a-b11d-0e5962716311-tmp\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.570156 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd70109-dee7-4a4a-b11d-0e5962716311-serving-cert\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.570256 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfndn\" (UniqueName: \"kubernetes.io/projected/0dd70109-dee7-4a4a-b11d-0e5962716311-kube-api-access-cfndn\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.609614 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.672092 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01fab69a-42d0-4eb1-8807-73b22ee7a852-tmp\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.672146 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-config\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.672910 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fab69a-42d0-4eb1-8807-73b22ee7a852-serving-cert\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.672991 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0dd70109-dee7-4a4a-b11d-0e5962716311-tmp\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.672992 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01fab69a-42d0-4eb1-8807-73b22ee7a852-tmp\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673032 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd70109-dee7-4a4a-b11d-0e5962716311-serving-cert\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfndn\" (UniqueName: \"kubernetes.io/projected/0dd70109-dee7-4a4a-b11d-0e5962716311-kube-api-access-cfndn\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673328 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-proxy-ca-bundles\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673375 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdj25\" (UniqueName: \"kubernetes.io/projected/01fab69a-42d0-4eb1-8807-73b22ee7a852-kube-api-access-hdj25\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673470 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-client-ca\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673574 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-client-ca\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673605 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-config\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.673738 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0dd70109-dee7-4a4a-b11d-0e5962716311-tmp\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.674681 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-client-ca\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.674888 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-proxy-ca-bundles\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.675052 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-config\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.677696 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-client-ca\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: 
\"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.678077 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-config\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.679266 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd70109-dee7-4a4a-b11d-0e5962716311-serving-cert\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.683661 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fab69a-42d0-4eb1-8807-73b22ee7a852-serving-cert\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.693888 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdj25\" (UniqueName: \"kubernetes.io/projected/01fab69a-42d0-4eb1-8807-73b22ee7a852-kube-api-access-hdj25\") pod \"controller-manager-7bb4f97b4f-26h95\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.696243 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfndn\" (UniqueName: \"kubernetes.io/projected/0dd70109-dee7-4a4a-b11d-0e5962716311-kube-api-access-cfndn\") pod \"route-controller-manager-ccc77b589-szqsh\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.807341 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.810962 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.837429 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:40 crc kubenswrapper[5121]: I0126 00:15:40.861791 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 26 00:15:41 crc kubenswrapper[5121]: I0126 00:15:41.260200 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:15:41 crc kubenswrapper[5121]: I0126 00:15:41.312013 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:41 crc kubenswrapper[5121]: I0126 00:15:41.480204 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 26 00:15:41 crc kubenswrapper[5121]: I0126 00:15:41.483695 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:41 crc kubenswrapper[5121]: I0126 00:15:41.992311 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 26 00:15:41 crc kubenswrapper[5121]: I0126 00:15:41.993052 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.042656 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.117684 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.306832 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.497915 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.526413 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.827499 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.837750 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.878605 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.904944 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 26 00:15:42 crc kubenswrapper[5121]: I0126 00:15:42.968314 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.003347 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.173853 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.180373 5121 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.272060 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" event={"ID":"bd0af7d6-e1d4-4773-91f2-1f984bf1d785","Type":"ContainerStarted","Data":"22a02dd56089519953c5daf6fe36f7b61f41ff69548bbba1c585d872ef50dd1c"} Jan 26 00:15:43 crc kubenswrapper[5121]: W0126 00:15:43.288231 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda93d46dd_864a_4ce6_880b_bebd385ebfd5.slice/crio-442bff4c8001ae619b411d588a31cd2ccb1504d3395f1f3d9b8f91e7a1f50564 WatchSource:0}: Error finding container 442bff4c8001ae619b411d588a31cd2ccb1504d3395f1f3d9b8f91e7a1f50564: Status 404 returned error can't find the container with id 442bff4c8001ae619b411d588a31cd2ccb1504d3395f1f3d9b8f91e7a1f50564 Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.334670 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.344410 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.478607 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.633357 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.704996 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 26 00:15:43 crc kubenswrapper[5121]: W0126 00:15:43.713424 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dd70109_dee7_4a4a_b11d_0e5962716311.slice/crio-3336ada7c384781da613fe4b5d3c16430f134ea5c1e5ebd38b3747944f77998d WatchSource:0}: Error finding container 3336ada7c384781da613fe4b5d3c16430f134ea5c1e5ebd38b3747944f77998d: Status 404 returned error can't find the container with id 3336ada7c384781da613fe4b5d3c16430f134ea5c1e5ebd38b3747944f77998d Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.771299 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 00:15:43.879130 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 26 00:15:43 crc kubenswrapper[5121]: I0126 
00:15:43.922214 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.006203 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.140995 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.280911 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" event={"ID":"0dd70109-dee7-4a4a-b11d-0e5962716311","Type":"ContainerStarted","Data":"7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230"} Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.280979 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" event={"ID":"0dd70109-dee7-4a4a-b11d-0e5962716311","Type":"ContainerStarted","Data":"3336ada7c384781da613fe4b5d3c16430f134ea5c1e5ebd38b3747944f77998d"} Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.282707 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.284369 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" event={"ID":"01fab69a-42d0-4eb1-8807-73b22ee7a852","Type":"ContainerStarted","Data":"626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0"} Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.284421 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" event={"ID":"01fab69a-42d0-4eb1-8807-73b22ee7a852","Type":"ContainerStarted","Data":"57e1eb729e1a0d01c03129f2b169d0e89a936748d7f2d9fbd8dd93962ef8cb4f"} Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.285059 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.287234 5121 generic.go:358] "Generic (PLEG): container finished" podID="bd0af7d6-e1d4-4773-91f2-1f984bf1d785" containerID="896806f9fec387a5cd66d6e8ad8b535a186be6e55ddcf2e0e5ee2bfdaeed334f" exitCode=0 Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.287659 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" event={"ID":"bd0af7d6-e1d4-4773-91f2-1f984bf1d785","Type":"ContainerDied","Data":"896806f9fec387a5cd66d6e8ad8b535a186be6e55ddcf2e0e5ee2bfdaeed334f"} Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.289380 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54759f584-d87tt" event={"ID":"a93d46dd-864a-4ce6-880b-bebd385ebfd5","Type":"ContainerStarted","Data":"15c4242d83b7187a98f386ce68c8786b261e805593532619bdd80383f928ed92"} Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.289422 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54759f584-d87tt" 
event={"ID":"a93d46dd-864a-4ce6-880b-bebd385ebfd5","Type":"ContainerStarted","Data":"442bff4c8001ae619b411d588a31cd2ccb1504d3395f1f3d9b8f91e7a1f50564"} Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.290506 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.307957 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" podStartSLOduration=12.307924294 podStartE2EDuration="12.307924294s" podCreationTimestamp="2026-01-26 00:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:44.304468018 +0000 UTC m=+375.463669163" watchObservedRunningTime="2026-01-26 00:15:44.307924294 +0000 UTC m=+375.467125419" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.332035 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" podStartSLOduration=12.332012724 podStartE2EDuration="12.332012724s" podCreationTimestamp="2026-01-26 00:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:44.330341238 +0000 UTC m=+375.489542373" watchObservedRunningTime="2026-01-26 00:15:44.332012724 +0000 UTC m=+375.491213849" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.338001 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.383496 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-54759f584-d87tt" podStartSLOduration=158.383457856 podStartE2EDuration="2m38.383457856s" podCreationTimestamp="2026-01-26 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:44.381553473 +0000 UTC m=+375.540754588" watchObservedRunningTime="2026-01-26 00:15:44.383457856 +0000 UTC m=+375.542658981" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.454399 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-54759f584-d87tt" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.573923 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.701495 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.734809 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:44 crc kubenswrapper[5121]: I0126 00:15:44.949585 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.245855 5121 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.286296 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.453008 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.632057 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.797702 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.866813 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmdbr\" (UniqueName: \"kubernetes.io/projected/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-kube-api-access-dmdbr\") pod \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.866940 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-secret-volume\") pod \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.867011 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-config-volume\") pod \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\" (UID: \"bd0af7d6-e1d4-4773-91f2-1f984bf1d785\") " Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.868101 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd0af7d6-e1d4-4773-91f2-1f984bf1d785" (UID: "bd0af7d6-e1d4-4773-91f2-1f984bf1d785"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.875755 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd0af7d6-e1d4-4773-91f2-1f984bf1d785" (UID: "bd0af7d6-e1d4-4773-91f2-1f984bf1d785"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.875942 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-kube-api-access-dmdbr" (OuterVolumeSpecName: "kube-api-access-dmdbr") pod "bd0af7d6-e1d4-4773-91f2-1f984bf1d785" (UID: "bd0af7d6-e1d4-4773-91f2-1f984bf1d785"). InnerVolumeSpecName "kube-api-access-dmdbr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.968648 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmdbr\" (UniqueName: \"kubernetes.io/projected/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-kube-api-access-dmdbr\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.968711 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.968722 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0af7d6-e1d4-4773-91f2-1f984bf1d785-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:45 crc kubenswrapper[5121]: I0126 00:15:45.972779 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.110198 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.373629 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.373671 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-rvsh8" event={"ID":"bd0af7d6-e1d4-4773-91f2-1f984bf1d785","Type":"ContainerDied","Data":"22a02dd56089519953c5daf6fe36f7b61f41ff69548bbba1c585d872ef50dd1c"} Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.374454 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22a02dd56089519953c5daf6fe36f7b61f41ff69548bbba1c585d872ef50dd1c" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.613469 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.637070 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.675794 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.783288 5121 ???:1] "http: TLS handshake error from 192.168.126.11:36730: no serving certificate available for the kubelet" Jan 26 00:15:46 crc kubenswrapper[5121]: I0126 00:15:46.990707 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:47 crc kubenswrapper[5121]: I0126 00:15:47.050263 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 26 00:15:47 crc kubenswrapper[5121]: I0126 00:15:47.079217 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 26 00:15:47 crc kubenswrapper[5121]: I0126 00:15:47.363711 5121 
reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 26 00:15:47 crc kubenswrapper[5121]: I0126 00:15:47.622011 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 26 00:15:47 crc kubenswrapper[5121]: I0126 00:15:47.728887 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 26 00:15:47 crc kubenswrapper[5121]: I0126 00:15:47.816839 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 26 00:15:47 crc kubenswrapper[5121]: I0126 00:15:47.959471 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 26 00:15:48 crc kubenswrapper[5121]: I0126 00:15:48.142131 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:15:48 crc kubenswrapper[5121]: I0126 00:15:48.278238 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 26 00:15:48 crc kubenswrapper[5121]: I0126 00:15:48.379919 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 26 00:15:48 crc kubenswrapper[5121]: I0126 00:15:48.567315 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 26 00:15:48 crc kubenswrapper[5121]: I0126 00:15:48.590832 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 26 00:15:48 crc kubenswrapper[5121]: I0126 00:15:48.869154 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 26 00:15:48 crc kubenswrapper[5121]: I0126 00:15:48.936065 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 26 00:15:49 crc kubenswrapper[5121]: I0126 00:15:49.106010 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 26 00:15:49 crc kubenswrapper[5121]: I0126 00:15:49.169994 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:49 crc kubenswrapper[5121]: I0126 00:15:49.260695 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 26 00:15:49 crc kubenswrapper[5121]: I0126 00:15:49.298426 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 26 00:15:49 crc kubenswrapper[5121]: I0126 00:15:49.940581 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:50 crc kubenswrapper[5121]: I0126 00:15:50.262018 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12" Jan 26 00:15:50 crc kubenswrapper[5121]: 
E0126 00:15:50.262315 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-926kg_openshift-marketplace(4c75b2fc-a93e-44bd-9070-7512402f3f71)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71"
Jan 26 00:15:50 crc kubenswrapper[5121]: I0126 00:15:50.309413 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 26 00:15:50 crc kubenswrapper[5121]: I0126 00:15:50.338941 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 26 00:15:50 crc kubenswrapper[5121]: I0126 00:15:50.343555 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 26 00:15:50 crc kubenswrapper[5121]: I0126 00:15:50.650250 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 26 00:15:50 crc kubenswrapper[5121]: I0126 00:15:50.916404 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 26 00:15:51 crc kubenswrapper[5121]: I0126 00:15:51.259161 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 26 00:15:51 crc kubenswrapper[5121]: I0126 00:15:51.357667 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 26 00:15:51 crc kubenswrapper[5121]: I0126 00:15:51.597222 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 26 00:15:51 crc kubenswrapper[5121]: I0126 00:15:51.612842 5121 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 00:15:51 crc kubenswrapper[5121]: I0126 00:15:51.613715 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41" gracePeriod=5
Jan 26 00:15:51 crc kubenswrapper[5121]: I0126 00:15:51.788275 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 26 00:15:51 crc kubenswrapper[5121]: I0126 00:15:51.981697 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.161327 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.353310 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
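The E-level pod_workers entry above is the one real failure in this stretch: marketplace-operator is crash-looping, and the kubelet refuses to call StartContainer again until the current 40s back-off expires. Kubernetes documents the restart back-off as starting at 10s and doubling on each consecutive crash, capped at five minutes and reset after ten minutes of clean running, so "back-off 40s" suggests roughly the third consecutive failure. A toy sketch of that documented schedule (not kubelet code):

```python
# Crash-loop restart back-off as documented for the kubelet: 10s base,
# doubling per consecutive crash, capped at 5 minutes.
def backoff_schedule(restarts: int, base: float = 10.0, cap: float = 300.0) -> list[float]:
    return [min(base * 2**i, cap) for i in range(restarts)]

print(backoff_schedule(6))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```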
pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-26h95"] Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.491589 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" podUID="01fab69a-42d0-4eb1-8807-73b22ee7a852" containerName="controller-manager" containerID="cri-o://626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0" gracePeriod=30 Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.519299 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh"] Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.519871 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" podUID="0dd70109-dee7-4a4a-b11d-0e5962716311" containerName="route-controller-manager" containerID="cri-o://7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230" gracePeriod=30 Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.550304 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.672504 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.920015 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.958123 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 26 00:15:52 crc kubenswrapper[5121]: I0126 00:15:52.987031 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.012424 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-client-ca\") pod \"0dd70109-dee7-4a4a-b11d-0e5962716311\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.012545 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-config\") pod \"0dd70109-dee7-4a4a-b11d-0e5962716311\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.012749 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd70109-dee7-4a4a-b11d-0e5962716311-serving-cert\") pod \"0dd70109-dee7-4a4a-b11d-0e5962716311\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.012835 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfndn\" (UniqueName: \"kubernetes.io/projected/0dd70109-dee7-4a4a-b11d-0e5962716311-kube-api-access-cfndn\") pod \"0dd70109-dee7-4a4a-b11d-0e5962716311\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.012938 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0dd70109-dee7-4a4a-b11d-0e5962716311-tmp\") pod \"0dd70109-dee7-4a4a-b11d-0e5962716311\" (UID: \"0dd70109-dee7-4a4a-b11d-0e5962716311\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.013350 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd70109-dee7-4a4a-b11d-0e5962716311-tmp" (OuterVolumeSpecName: "tmp") pod "0dd70109-dee7-4a4a-b11d-0e5962716311" (UID: "0dd70109-dee7-4a4a-b11d-0e5962716311"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.013681 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-client-ca" (OuterVolumeSpecName: "client-ca") pod "0dd70109-dee7-4a4a-b11d-0e5962716311" (UID: "0dd70109-dee7-4a4a-b11d-0e5962716311"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.014093 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.014110 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0dd70109-dee7-4a4a-b11d-0e5962716311-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.014676 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-config" (OuterVolumeSpecName: "config") pod "0dd70109-dee7-4a4a-b11d-0e5962716311" (UID: "0dd70109-dee7-4a4a-b11d-0e5962716311"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.031920 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.033087 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034026 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034125 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd0af7d6-e1d4-4773-91f2-1f984bf1d785" containerName="collect-profiles" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034209 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0af7d6-e1d4-4773-91f2-1f984bf1d785" containerName="collect-profiles" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034307 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0dd70109-dee7-4a4a-b11d-0e5962716311" containerName="route-controller-manager" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034391 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd70109-dee7-4a4a-b11d-0e5962716311" containerName="route-controller-manager" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034059 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd70109-dee7-4a4a-b11d-0e5962716311-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0dd70109-dee7-4a4a-b11d-0e5962716311" (UID: "0dd70109-dee7-4a4a-b11d-0e5962716311"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034859 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.034956 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0dd70109-dee7-4a4a-b11d-0e5962716311" containerName="route-controller-manager" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.035157 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="bd0af7d6-e1d4-4773-91f2-1f984bf1d785" containerName="collect-profiles" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.037125 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd70109-dee7-4a4a-b11d-0e5962716311-kube-api-access-cfndn" (OuterVolumeSpecName: "kube-api-access-cfndn") pod "0dd70109-dee7-4a4a-b11d-0e5962716311" (UID: "0dd70109-dee7-4a4a-b11d-0e5962716311"). InnerVolumeSpecName "kube-api-access-cfndn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.050714 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.071845 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.086986 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.115439 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd70109-dee7-4a4a-b11d-0e5962716311-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.115899 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dd70109-dee7-4a4a-b11d-0e5962716311-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.115958 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfndn\" (UniqueName: \"kubernetes.io/projected/0dd70109-dee7-4a4a-b11d-0e5962716311-kube-api-access-cfndn\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.121274 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d75c6446b-7dwng"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.121993 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01fab69a-42d0-4eb1-8807-73b22ee7a852" containerName="controller-manager" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.122017 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="01fab69a-42d0-4eb1-8807-73b22ee7a852" containerName="controller-manager" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.122142 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="01fab69a-42d0-4eb1-8807-73b22ee7a852" containerName="controller-manager" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.129126 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.130810 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d75c6446b-7dwng"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.217452 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-config\") pod \"01fab69a-42d0-4eb1-8807-73b22ee7a852\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.217734 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fab69a-42d0-4eb1-8807-73b22ee7a852-serving-cert\") pod \"01fab69a-42d0-4eb1-8807-73b22ee7a852\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.217918 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-client-ca\") pod \"01fab69a-42d0-4eb1-8807-73b22ee7a852\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.217959 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01fab69a-42d0-4eb1-8807-73b22ee7a852-tmp\") pod \"01fab69a-42d0-4eb1-8807-73b22ee7a852\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.218016 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdj25\" (UniqueName: \"kubernetes.io/projected/01fab69a-42d0-4eb1-8807-73b22ee7a852-kube-api-access-hdj25\") pod \"01fab69a-42d0-4eb1-8807-73b22ee7a852\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.218167 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-proxy-ca-bundles\") pod \"01fab69a-42d0-4eb1-8807-73b22ee7a852\" (UID: \"01fab69a-42d0-4eb1-8807-73b22ee7a852\") " Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.218281 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01fab69a-42d0-4eb1-8807-73b22ee7a852-tmp" (OuterVolumeSpecName: "tmp") pod "01fab69a-42d0-4eb1-8807-73b22ee7a852" (UID: "01fab69a-42d0-4eb1-8807-73b22ee7a852"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.218420 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-tmp\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.218569 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-config\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-proxy-ca-bundles\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.218890 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-config" (OuterVolumeSpecName: "config") pod "01fab69a-42d0-4eb1-8807-73b22ee7a852" (UID: "01fab69a-42d0-4eb1-8807-73b22ee7a852"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219080 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrzp\" (UniqueName: \"kubernetes.io/projected/113beae1-0be4-4a89-8f92-e0af868e7708-kube-api-access-tgrzp\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.218932 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-client-ca" (OuterVolumeSpecName: "client-ca") pod "01fab69a-42d0-4eb1-8807-73b22ee7a852" (UID: "01fab69a-42d0-4eb1-8807-73b22ee7a852"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219120 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-config\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-client-ca\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219222 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-serving-cert\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219246 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-client-ca\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219284 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113beae1-0be4-4a89-8f92-e0af868e7708-serving-cert\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219306 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshw4\" (UniqueName: \"kubernetes.io/projected/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-kube-api-access-tshw4\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219618 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/113beae1-0be4-4a89-8f92-e0af868e7708-tmp\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219888 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219899 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01fab69a-42d0-4eb1-8807-73b22ee7a852" (UID: "01fab69a-42d0-4eb1-8807-73b22ee7a852"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219924 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.219947 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/01fab69a-42d0-4eb1-8807-73b22ee7a852-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.222213 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01fab69a-42d0-4eb1-8807-73b22ee7a852-kube-api-access-hdj25" (OuterVolumeSpecName: "kube-api-access-hdj25") pod "01fab69a-42d0-4eb1-8807-73b22ee7a852" (UID: "01fab69a-42d0-4eb1-8807-73b22ee7a852"). InnerVolumeSpecName "kube-api-access-hdj25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.222544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fab69a-42d0-4eb1-8807-73b22ee7a852-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01fab69a-42d0-4eb1-8807-73b22ee7a852" (UID: "01fab69a-42d0-4eb1-8807-73b22ee7a852"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321437 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/113beae1-0be4-4a89-8f92-e0af868e7708-tmp\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321532 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-tmp\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321564 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-config\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321649 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-proxy-ca-bundles\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321700 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrzp\" (UniqueName: 
\"kubernetes.io/projected/113beae1-0be4-4a89-8f92-e0af868e7708-kube-api-access-tgrzp\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321786 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-config\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321816 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-client-ca\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.321858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-serving-cert\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.322897 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-client-ca\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.323051 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113beae1-0be4-4a89-8f92-e0af868e7708-serving-cert\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.323107 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tshw4\" (UniqueName: \"kubernetes.io/projected/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-kube-api-access-tshw4\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.323196 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fab69a-42d0-4eb1-8807-73b22ee7a852-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.323230 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hdj25\" (UniqueName: \"kubernetes.io/projected/01fab69a-42d0-4eb1-8807-73b22ee7a852-kube-api-access-hdj25\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.323251 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/01fab69a-42d0-4eb1-8807-73b22ee7a852-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.324191 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-tmp\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.324226 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-client-ca\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.324318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-proxy-ca-bundles\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.324891 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-client-ca\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.325081 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-config\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.325245 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/113beae1-0be4-4a89-8f92-e0af868e7708-tmp\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.326533 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-config\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.332009 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-serving-cert\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.332403 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/113beae1-0be4-4a89-8f92-e0af868e7708-serving-cert\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.341534 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tshw4\" (UniqueName: \"kubernetes.io/projected/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-kube-api-access-tshw4\") pod \"route-controller-manager-749f5d557b-27svg\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") " pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.342271 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrzp\" (UniqueName: \"kubernetes.io/projected/113beae1-0be4-4a89-8f92-e0af868e7708-kube-api-access-tgrzp\") pod \"controller-manager-d75c6446b-7dwng\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") " pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.369985 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.424669 5121 generic.go:358] "Generic (PLEG): container finished" podID="01fab69a-42d0-4eb1-8807-73b22ee7a852" containerID="626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0" exitCode=0 Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.424935 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" event={"ID":"01fab69a-42d0-4eb1-8807-73b22ee7a852","Type":"ContainerDied","Data":"626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0"} Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.424983 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" event={"ID":"01fab69a-42d0-4eb1-8807-73b22ee7a852","Type":"ContainerDied","Data":"57e1eb729e1a0d01c03129f2b169d0e89a936748d7f2d9fbd8dd93962ef8cb4f"} Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.425009 5121 scope.go:117] "RemoveContainer" containerID="626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.425182 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-26h95" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.434799 5121 generic.go:358] "Generic (PLEG): container finished" podID="0dd70109-dee7-4a4a-b11d-0e5962716311" containerID="7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230" exitCode=0 Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.434951 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.434956 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" event={"ID":"0dd70109-dee7-4a4a-b11d-0e5962716311","Type":"ContainerDied","Data":"7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230"} Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.435034 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh" event={"ID":"0dd70109-dee7-4a4a-b11d-0e5962716311","Type":"ContainerDied","Data":"3336ada7c384781da613fe4b5d3c16430f134ea5c1e5ebd38b3747944f77998d"} Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.452584 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.457376 5121 scope.go:117] "RemoveContainer" containerID="626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0" Jan 26 00:15:53 crc kubenswrapper[5121]: E0126 00:15:53.459923 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0\": container with ID starting with 626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0 not found: ID does not exist" containerID="626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.459986 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0"} err="failed to get container status \"626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0\": rpc error: code = NotFound desc = could not find container \"626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0\": container with ID starting with 626daf161269351cb11ab5defe6d719daad5b09d7f26045892ce6105618efba0 not found: ID does not exist" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.460019 5121 scope.go:117] "RemoveContainer" containerID="7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.479057 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-26h95"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.487706 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-26h95"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.502818 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.505604 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-szqsh"] Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.508190 5121 scope.go:117] "RemoveContainer" containerID="7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230" Jan 26 00:15:53 crc kubenswrapper[5121]: E0126 00:15:53.509076 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230\": container with ID starting with 7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230 not found: ID does not exist" containerID="7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.509117 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230"} err="failed to get container status \"7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230\": rpc error: code = NotFound desc = could not find container \"7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230\": container with ID starting with 7f06a60b6bfef5dec27c05b91ebcfc012b874d7b6219b78a7a0dcd459222e230 not found: ID does not exist" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.532169 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.570925 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 26 00:15:53 crc kubenswrapper[5121]: I0126 00:15:53.668045 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.217560 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.217610 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.230210 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.271069 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01fab69a-42d0-4eb1-8807-73b22ee7a852" path="/var/lib/kubelet/pods/01fab69a-42d0-4eb1-8807-73b22ee7a852/volumes" Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.272184 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd70109-dee7-4a4a-b11d-0e5962716311" path="/var/lib/kubelet/pods/0dd70109-dee7-4a4a-b11d-0e5962716311/volumes" Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.361384 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"] Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.436366 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d75c6446b-7dwng"] Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.483566 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" event={"ID":"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c","Type":"ContainerStarted","Data":"75aedf5ce48e6109d566ea1b77c7051f9d56b06fd5814d5afbb996de2351684a"} Jan 26 00:15:54 crc kubenswrapper[5121]: I0126 00:15:54.525938 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 26 00:15:54 crc 
kubenswrapper[5121]: I0126 00:15:54.558966 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.093535 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.281993 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.567306 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.585890 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" event={"ID":"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c","Type":"ContainerStarted","Data":"8088bba372bcf9ca9a83c26ec0328bc7d430d564677f6ce56c19ed7267962a93"} Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.586120 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.593304 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" event={"ID":"113beae1-0be4-4a89-8f92-e0af868e7708","Type":"ContainerStarted","Data":"bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e"} Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.593369 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" event={"ID":"113beae1-0be4-4a89-8f92-e0af868e7708","Type":"ContainerStarted","Data":"5ad9049a9f2356ca708379d7180ea3831af45017d373de21d5446d496d9b59da"} Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.593703 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.593841 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.601432 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.619125 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" podStartSLOduration=3.61908137 podStartE2EDuration="3.61908137s" podCreationTimestamp="2026-01-26 00:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:55.613372821 +0000 UTC m=+386.772573946" watchObservedRunningTime="2026-01-26 00:15:55.61908137 +0000 UTC m=+386.778282505" Jan 26 00:15:55 crc kubenswrapper[5121]: I0126 00:15:55.649048 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" podStartSLOduration=3.649014523 podStartE2EDuration="3.649014523s" 
podCreationTimestamp="2026-01-26 00:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:15:55.642704127 +0000 UTC m=+386.801905262" watchObservedRunningTime="2026-01-26 00:15:55.649014523 +0000 UTC m=+386.808215648" Jan 26 00:15:56 crc kubenswrapper[5121]: I0126 00:15:56.361207 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 26 00:15:56 crc kubenswrapper[5121]: I0126 00:15:56.649115 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 26 00:15:56 crc kubenswrapper[5121]: I0126 00:15:56.677056 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.304379 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.304602 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.394819 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407386 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407492 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407531 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407583 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407604 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407645 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407677 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407748 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407878 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407967 5121 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407988 5121 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.407999 5121 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.421557 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.495908 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.509899 5121 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.509952 5121 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.611649 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.611714 5121 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41" exitCode=137 Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.611875 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.611987 5121 scope.go:117] "RemoveContainer" containerID="15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.638294 5121 scope.go:117] "RemoveContainer" containerID="15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41" Jan 26 00:15:57 crc kubenswrapper[5121]: E0126 00:15:57.638969 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41\": container with ID starting with 15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41 not found: ID does not exist" containerID="15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.639050 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41"} err="failed to get container status \"15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41\": rpc error: code = NotFound desc = could not find container \"15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41\": container with ID starting with 15172e1141a1e8bd8686f1fdf61ff61e3c15aeaba5c47c7dc41e59c31f564a41 not found: ID does not exist" Jan 26 00:15:57 crc kubenswrapper[5121]: I0126 00:15:57.837372 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 26 00:15:58 crc kubenswrapper[5121]: I0126 00:15:58.265297 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 26 00:15:58 crc kubenswrapper[5121]: I0126 00:15:58.265598 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 00:15:58 crc 
kubenswrapper[5121]: I0126 00:15:58.280784 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:15:58 crc kubenswrapper[5121]: I0126 00:15:58.280846 5121 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="792f9105-36d9-4c1b-ac90-07e59de2c84e" Jan 26 00:15:58 crc kubenswrapper[5121]: I0126 00:15:58.285839 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:15:58 crc kubenswrapper[5121]: I0126 00:15:58.285942 5121 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="792f9105-36d9-4c1b-ac90-07e59de2c84e" Jan 26 00:15:58 crc kubenswrapper[5121]: I0126 00:15:58.737880 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 26 00:15:58 crc kubenswrapper[5121]: I0126 00:15:58.969923 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 26 00:15:59 crc kubenswrapper[5121]: I0126 00:15:59.781332 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 26 00:16:00 crc kubenswrapper[5121]: I0126 00:16:00.568635 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 26 00:16:00 crc kubenswrapper[5121]: I0126 00:16:00.943737 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 26 00:16:01 crc kubenswrapper[5121]: I0126 00:16:01.257421 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12" Jan 26 00:16:01 crc kubenswrapper[5121]: I0126 00:16:01.644141 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/3.log" Jan 26 00:16:01 crc kubenswrapper[5121]: I0126 00:16:01.644233 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerStarted","Data":"78b4143a32f5b93c26ca39b0bd6d67984d39a0daf6049d5d480d1e74718e6d2f"} Jan 26 00:16:01 crc kubenswrapper[5121]: I0126 00:16:01.644679 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:16:01 crc kubenswrapper[5121]: I0126 00:16:01.646116 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-926kg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 00:16:01 crc kubenswrapper[5121]: I0126 00:16:01.646237 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 00:16:02 crc kubenswrapper[5121]: I0126 00:16:02.018555 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:16:02 crc kubenswrapper[5121]: I0126 00:16:02.361320 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 26 00:16:02 crc kubenswrapper[5121]: I0126 00:16:02.666828 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:16:05 crc kubenswrapper[5121]: I0126 00:16:05.286559 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 26 00:16:07 crc kubenswrapper[5121]: I0126 00:16:07.300351 5121 ???:1] "http: TLS handshake error from 192.168.126.11:50562: no serving certificate available for the kubelet" Jan 26 00:16:21 crc kubenswrapper[5121]: I0126 00:16:21.650047 5121 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 00:16:32 crc kubenswrapper[5121]: I0126 00:16:32.495258 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d75c6446b-7dwng"] Jan 26 00:16:32 crc kubenswrapper[5121]: I0126 00:16:32.496372 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" podUID="113beae1-0be4-4a89-8f92-e0af868e7708" containerName="controller-manager" containerID="cri-o://bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e" gracePeriod=30 Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.287529 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.331840 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"] Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.333244 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="113beae1-0be4-4a89-8f92-e0af868e7708" containerName="controller-manager" Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.333423 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="113beae1-0be4-4a89-8f92-e0af868e7708" containerName="controller-manager" Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.333686 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="113beae1-0be4-4a89-8f92-e0af868e7708" containerName="controller-manager" Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.350869 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"] Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.351154 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.366068 5121 generic.go:358] "Generic (PLEG): container finished" podID="113beae1-0be4-4a89-8f92-e0af868e7708" containerID="bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e" exitCode=0
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.366357 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.366689 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" event={"ID":"113beae1-0be4-4a89-8f92-e0af868e7708","Type":"ContainerDied","Data":"bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e"}
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.366751 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d75c6446b-7dwng" event={"ID":"113beae1-0be4-4a89-8f92-e0af868e7708","Type":"ContainerDied","Data":"5ad9049a9f2356ca708379d7180ea3831af45017d373de21d5446d496d9b59da"}
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.366804 5121 scope.go:117] "RemoveContainer" containerID="bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.407934 5121 scope.go:117] "RemoveContainer" containerID="bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e"
Jan 26 00:16:33 crc kubenswrapper[5121]: E0126 00:16:33.408883 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e\": container with ID starting with bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e not found: ID does not exist" containerID="bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.408950 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e"} err="failed to get container status \"bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e\": rpc error: code = NotFound desc = could not find container \"bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e\": container with ID starting with bdb92e47e25b0cb360a8a97e8580efa892b42b58e809ee249e0041a42137967e not found: ID does not exist"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.468081 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/113beae1-0be4-4a89-8f92-e0af868e7708-tmp\") pod \"113beae1-0be4-4a89-8f92-e0af868e7708\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") "
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.468304 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-client-ca\") pod \"113beae1-0be4-4a89-8f92-e0af868e7708\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") "
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.468394 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-config\") pod \"113beae1-0be4-4a89-8f92-e0af868e7708\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") "
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.468432 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-proxy-ca-bundles\") pod \"113beae1-0be4-4a89-8f92-e0af868e7708\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") "
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.468481 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113beae1-0be4-4a89-8f92-e0af868e7708-serving-cert\") pod \"113beae1-0be4-4a89-8f92-e0af868e7708\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") "
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.468959 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgrzp\" (UniqueName: \"kubernetes.io/projected/113beae1-0be4-4a89-8f92-e0af868e7708-kube-api-access-tgrzp\") pod \"113beae1-0be4-4a89-8f92-e0af868e7708\" (UID: \"113beae1-0be4-4a89-8f92-e0af868e7708\") "
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.469232 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-proxy-ca-bundles\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.469288 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-client-ca\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.469338 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113beae1-0be4-4a89-8f92-e0af868e7708-tmp" (OuterVolumeSpecName: "tmp") pod "113beae1-0be4-4a89-8f92-e0af868e7708" (UID: "113beae1-0be4-4a89-8f92-e0af868e7708"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.469386 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw94h\" (UniqueName: \"kubernetes.io/projected/6d259337-ec4f-4194-92f2-11d228c4385f-kube-api-access-fw94h\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.469688 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d259337-ec4f-4194-92f2-11d228c4385f-tmp\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.469852 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-client-ca" (OuterVolumeSpecName: "client-ca") pod "113beae1-0be4-4a89-8f92-e0af868e7708" (UID: "113beae1-0be4-4a89-8f92-e0af868e7708"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.469928 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-config\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.470032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d259337-ec4f-4194-92f2-11d228c4385f-serving-cert\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.470064 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "113beae1-0be4-4a89-8f92-e0af868e7708" (UID: "113beae1-0be4-4a89-8f92-e0af868e7708"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.470176 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.470198 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/113beae1-0be4-4a89-8f92-e0af868e7708-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.470609 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-config" (OuterVolumeSpecName: "config") pod "113beae1-0be4-4a89-8f92-e0af868e7708" (UID: "113beae1-0be4-4a89-8f92-e0af868e7708"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.476444 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113beae1-0be4-4a89-8f92-e0af868e7708-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "113beae1-0be4-4a89-8f92-e0af868e7708" (UID: "113beae1-0be4-4a89-8f92-e0af868e7708"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.477718 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113beae1-0be4-4a89-8f92-e0af868e7708-kube-api-access-tgrzp" (OuterVolumeSpecName: "kube-api-access-tgrzp") pod "113beae1-0be4-4a89-8f92-e0af868e7708" (UID: "113beae1-0be4-4a89-8f92-e0af868e7708"). InnerVolumeSpecName "kube-api-access-tgrzp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.571783 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fw94h\" (UniqueName: \"kubernetes.io/projected/6d259337-ec4f-4194-92f2-11d228c4385f-kube-api-access-fw94h\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.571902 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d259337-ec4f-4194-92f2-11d228c4385f-tmp\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.571980 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-config\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.572015 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d259337-ec4f-4194-92f2-11d228c4385f-serving-cert\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.572041 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-proxy-ca-bundles\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.572072 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-client-ca\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.572163 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.572179 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/113beae1-0be4-4a89-8f92-e0af868e7708-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.572192 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113beae1-0be4-4a89-8f92-e0af868e7708-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.572203 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgrzp\" (UniqueName: \"kubernetes.io/projected/113beae1-0be4-4a89-8f92-e0af868e7708-kube-api-access-tgrzp\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.573587 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-client-ca\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.574818 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-config\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.575603 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6d259337-ec4f-4194-92f2-11d228c4385f-tmp\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.576923 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d259337-ec4f-4194-92f2-11d228c4385f-proxy-ca-bundles\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.581850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d259337-ec4f-4194-92f2-11d228c4385f-serving-cert\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.603612 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw94h\" (UniqueName: \"kubernetes.io/projected/6d259337-ec4f-4194-92f2-11d228c4385f-kube-api-access-fw94h\") pod \"controller-manager-7bb4f97b4f-px5mb\" (UID: \"6d259337-ec4f-4194-92f2-11d228c4385f\") " pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.673175 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.706551 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d75c6446b-7dwng"]
Jan 26 00:16:33 crc kubenswrapper[5121]: I0126 00:16:33.712750 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d75c6446b-7dwng"]
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.128363 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"]
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.265050 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113beae1-0be4-4a89-8f92-e0af868e7708" path="/var/lib/kubelet/pods/113beae1-0be4-4a89-8f92-e0af868e7708/volumes"
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.375818 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb" event={"ID":"6d259337-ec4f-4194-92f2-11d228c4385f","Type":"ContainerStarted","Data":"ba0e39569c4849fb03fe7817e0e032d62736a460216c2d7b36c544bd6a676d56"}
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.376143 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb" event={"ID":"6d259337-ec4f-4194-92f2-11d228c4385f","Type":"ContainerStarted","Data":"3c8722c44b60419d8185736441a279e1fa02fa85577548426e71996e377b7113"}
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.376292 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.377986 5121 patch_prober.go:28] interesting pod/controller-manager-7bb4f97b4f-px5mb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.378162 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb" podUID="6d259337-ec4f-4194-92f2-11d228c4385f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Jan 26 00:16:34 crc kubenswrapper[5121]: I0126 00:16:34.409882 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb" podStartSLOduration=2.409830216 podStartE2EDuration="2.409830216s" podCreationTimestamp="2026-01-26 00:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:34.398270827 +0000 UTC m=+425.557471952" watchObservedRunningTime="2026-01-26 00:16:34.409830216 +0000 UTC m=+425.569031341"
Jan 26 00:16:35 crc kubenswrapper[5121]: I0126 00:16:35.391074 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bb4f97b4f-px5mb"
Jan 26 00:16:48 crc kubenswrapper[5121]: I0126 00:16:48.292688 5121 ???:1] "http: TLS handshake error from 192.168.126.11:45060: no serving certificate available for the kubelet"
Jan 26 00:16:52 crc kubenswrapper[5121]: I0126 00:16:52.504218 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"]
Jan 26 00:16:52 crc kubenswrapper[5121]: I0126 00:16:52.505317 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" podUID="4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" containerName="route-controller-manager" containerID="cri-o://8088bba372bcf9ca9a83c26ec0328bc7d430d564677f6ce56c19ed7267962a93" gracePeriod=30
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.535497 5121 generic.go:358] "Generic (PLEG): container finished" podID="4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" containerID="8088bba372bcf9ca9a83c26ec0328bc7d430d564677f6ce56c19ed7267962a93" exitCode=0
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.535883 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" event={"ID":"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c","Type":"ContainerDied","Data":"8088bba372bcf9ca9a83c26ec0328bc7d430d564677f6ce56c19ed7267962a93"}
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.742904 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.782885 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"]
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785231 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-client-ca\") pod \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") "
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785315 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tshw4\" (UniqueName: \"kubernetes.io/projected/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-kube-api-access-tshw4\") pod \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") "
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785448 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-config\") pod \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") "
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785507 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-tmp\") pod \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") "
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785531 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-serving-cert\") pod \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\" (UID: \"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c\") "
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785573 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" containerName="route-controller-manager"
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785604 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" containerName="route-controller-manager"
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.785903 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" containerName="route-controller-manager"
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.786611 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-tmp" (OuterVolumeSpecName: "tmp") pod "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" (UID: "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.787066 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" (UID: "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.787197 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-config" (OuterVolumeSpecName: "config") pod "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" (UID: "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.798197 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-kube-api-access-tshw4" (OuterVolumeSpecName: "kube-api-access-tshw4") pod "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" (UID: "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c"). InnerVolumeSpecName "kube-api-access-tshw4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.814650 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" (UID: "4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.887977 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.888026 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tshw4\" (UniqueName: \"kubernetes.io/projected/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-kube-api-access-tshw4\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.888040 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.888054 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:53 crc kubenswrapper[5121]: I0126 00:16:53.888063 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.003138 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"]
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.003424 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.091164 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-serving-cert\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.091247 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8gbz\" (UniqueName: \"kubernetes.io/projected/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-kube-api-access-z8gbz\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.091286 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-config\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.091308 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-tmp\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.091336 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-client-ca\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.358385 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-serving-cert\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.360229 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z8gbz\" (UniqueName: \"kubernetes.io/projected/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-kube-api-access-z8gbz\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.360323 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-config\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.360353 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-tmp\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.360483 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-client-ca\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.381126 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-serving-cert\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.381953 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-tmp\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.382128 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-config\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.391105 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-client-ca\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.398899 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8gbz\" (UniqueName: \"kubernetes.io/projected/ddacdf5d-4f53-4913-bab7-4d2c76863f5e-kube-api-access-z8gbz\") pod \"route-controller-manager-ccc77b589-l9k42\" (UID: \"ddacdf5d-4f53-4913-bab7-4d2c76863f5e\") " pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.559625 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg" event={"ID":"4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c","Type":"ContainerDied","Data":"75aedf5ce48e6109d566ea1b77c7051f9d56b06fd5814d5afbb996de2351684a"}
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.559710 5121 scope.go:117] "RemoveContainer" containerID="8088bba372bcf9ca9a83c26ec0328bc7d430d564677f6ce56c19ed7267962a93"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.559975 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.614041 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"]
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.625660 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749f5d557b-27svg"]
Jan 26 00:16:54 crc kubenswrapper[5121]: I0126 00:16:54.660173 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:55 crc kubenswrapper[5121]: I0126 00:16:55.542744 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"]
Jan 26 00:16:55 crc kubenswrapper[5121]: W0126 00:16:55.559043 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddacdf5d_4f53_4913_bab7_4d2c76863f5e.slice/crio-bd81bd7321a2a7bead92bf4b4423459735da3e3f51cb5751347f8fb277ecc234 WatchSource:0}: Error finding container bd81bd7321a2a7bead92bf4b4423459735da3e3f51cb5751347f8fb277ecc234: Status 404 returned error can't find the container with id bd81bd7321a2a7bead92bf4b4423459735da3e3f51cb5751347f8fb277ecc234
Jan 26 00:16:55 crc kubenswrapper[5121]: I0126 00:16:55.573475 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42" event={"ID":"ddacdf5d-4f53-4913-bab7-4d2c76863f5e","Type":"ContainerStarted","Data":"bd81bd7321a2a7bead92bf4b4423459735da3e3f51cb5751347f8fb277ecc234"}
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.289629 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c" path="/var/lib/kubelet/pods/4c2f0ce8-435f-485d-8ace-3ff3c1e23d0c/volumes"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.373086 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfhxk"]
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.373431 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dfhxk" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="registry-server" containerID="cri-o://13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42" gracePeriod=30
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.391804 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4p4cc"]
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.392099 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4p4cc" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="registry-server" containerID="cri-o://6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18" gracePeriod=30
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.403842 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-926kg"]
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.404254 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" containerID="cri-o://78b4143a32f5b93c26ca39b0bd6d67984d39a0daf6049d5d480d1e74718e6d2f" gracePeriod=30
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.409913 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rfv"]
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.410239 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m7rfv" podUID="3225226b-6f86-4163-b401-b9136c86dfed" containerName="registry-server" containerID="cri-o://c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910" gracePeriod=30
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.434372 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hrfn9"]
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.434717 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hrfn9" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="registry-server" containerID="cri-o://1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c" gracePeriod=30
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.440338 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-th95t"]
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.719065 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-th95t"]
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.719163 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.719197 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42" event={"ID":"ddacdf5d-4f53-4913-bab7-4d2c76863f5e","Type":"ContainerStarted","Data":"cbca7a0767959a0dc22ca5f2a551eb2af64cb4c7f8da2ebbf876804051dcb7ce"}
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.720280 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.767374 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42" podStartSLOduration=4.767350297 podStartE2EDuration="4.767350297s" podCreationTimestamp="2026-01-26 00:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:16:56.763005633 +0000 UTC m=+447.922206778" watchObservedRunningTime="2026-01-26 00:16:56.767350297 +0000 UTC m=+447.926551422"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.783798 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56a05c39-385c-44b8-be51-7d5c3df9540d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.783887 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4br\" (UniqueName: \"kubernetes.io/projected/56a05c39-385c-44b8-be51-7d5c3df9540d-kube-api-access-hb4br\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.783922 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/56a05c39-385c-44b8-be51-7d5c3df9540d-tmp\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.784265 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56a05c39-385c-44b8-be51-7d5c3df9540d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.793194 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18 is running failed: container process not found" containerID="6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.793990 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18 is running failed: container process not found" containerID="6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.794838 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18 is running failed: container process not found" containerID="6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.794933 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-4p4cc" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="registry-server" probeResult="unknown"
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.801985 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910 is running failed: container process not found" containerID="c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.802540 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910 is running failed: container process not found" containerID="c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.802966 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910 is running failed: container process not found" containerID="c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.803068 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-m7rfv" podUID="3225226b-6f86-4163-b401-b9136c86dfed" containerName="registry-server" probeResult="unknown"
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.807347 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42 is running failed: container process not found" containerID="13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.807925 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42 is running failed: container process not found" containerID="13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.833388 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42 is running failed: container process not found" containerID="13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:56 crc kubenswrapper[5121]: E0126 00:16:56.833504 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-dfhxk" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="registry-server" probeResult="unknown"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.887753 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56a05c39-385c-44b8-be51-7d5c3df9540d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.886191 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56a05c39-385c-44b8-be51-7d5c3df9540d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.887889 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4br\" (UniqueName: \"kubernetes.io/projected/56a05c39-385c-44b8-be51-7d5c3df9540d-kube-api-access-hb4br\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.887928 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/56a05c39-385c-44b8-be51-7d5c3df9540d-tmp\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.887991 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56a05c39-385c-44b8-be51-7d5c3df9540d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.889314 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/56a05c39-385c-44b8-be51-7d5c3df9540d-tmp\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.896230 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56a05c39-385c-44b8-be51-7d5c3df9540d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.911532 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb4br\" (UniqueName: \"kubernetes.io/projected/56a05c39-385c-44b8-be51-7d5c3df9540d-kube-api-access-hb4br\") pod \"marketplace-operator-547dbd544d-th95t\" (UID: \"56a05c39-385c-44b8-be51-7d5c3df9540d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:56 crc kubenswrapper[5121]: I0126 00:16:56.950348 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t"
Jan 26 00:16:57 crc kubenswrapper[5121]: E0126 00:16:57.194570 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:57 crc kubenswrapper[5121]: E0126 00:16:57.196749 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:57 crc kubenswrapper[5121]: E0126 00:16:57.201951 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 00:16:57 crc kubenswrapper[5121]: E0126 00:16:57.202045 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-hrfn9" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="registry-server" probeResult="unknown"
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.579844 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ccc77b589-l9k42"
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.594406 5121 generic.go:358] "Generic (PLEG): container finished" podID="3225226b-6f86-4163-b401-b9136c86dfed" containerID="c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910" exitCode=0
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.594506 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rfv" event={"ID":"3225226b-6f86-4163-b401-b9136c86dfed","Type":"ContainerDied","Data":"c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910"}
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.617316 5121 generic.go:358] "Generic (PLEG): container finished" podID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerID="13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42" exitCode=0
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.617563 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfhxk" event={"ID":"c51b5df5-ef7d-4d88-b10c-1321140728e8","Type":"ContainerDied","Data":"13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42"}
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.626097 5121 generic.go:358] "Generic (PLEG): container finished" podID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerID="6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18" exitCode=0
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.626349 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4p4cc" event={"ID":"395eb036-2c83-4393-b3a7-d6b872cf9e4b","Type":"ContainerDied","Data":"6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18"}
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.634241 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-926kg_4c75b2fc-a93e-44bd-9070-7512402f3f71/marketplace-operator/3.log"
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.634320 5121 generic.go:358] "Generic (PLEG): container finished" podID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerID="78b4143a32f5b93c26ca39b0bd6d67984d39a0daf6049d5d480d1e74718e6d2f" exitCode=0
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.634401 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerDied","Data":"78b4143a32f5b93c26ca39b0bd6d67984d39a0daf6049d5d480d1e74718e6d2f"}
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.634499 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12"
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.695898 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4p4cc"
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.803444 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll4nk\" (UniqueName: \"kubernetes.io/projected/395eb036-2c83-4393-b3a7-d6b872cf9e4b-kube-api-access-ll4nk\") pod \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") "
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.803517 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-utilities\") pod \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") "
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.803548 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-catalog-content\") pod \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\" (UID: \"395eb036-2c83-4393-b3a7-d6b872cf9e4b\") "
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.804979 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-utilities" (OuterVolumeSpecName: "utilities") pod "395eb036-2c83-4393-b3a7-d6b872cf9e4b" (UID: "395eb036-2c83-4393-b3a7-d6b872cf9e4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.812978 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395eb036-2c83-4393-b3a7-d6b872cf9e4b-kube-api-access-ll4nk" (OuterVolumeSpecName: "kube-api-access-ll4nk") pod "395eb036-2c83-4393-b3a7-d6b872cf9e4b" (UID: "395eb036-2c83-4393-b3a7-d6b872cf9e4b"). InnerVolumeSpecName "kube-api-access-ll4nk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.867431 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "395eb036-2c83-4393-b3a7-d6b872cf9e4b" (UID: "395eb036-2c83-4393-b3a7-d6b872cf9e4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.905927 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ll4nk\" (UniqueName: \"kubernetes.io/projected/395eb036-2c83-4393-b3a7-d6b872cf9e4b-kube-api-access-ll4nk\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.906395 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.906406 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395eb036-2c83-4393-b3a7-d6b872cf9e4b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.975511 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rfv"
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.989560 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg"
Jan 26 00:16:57 crc kubenswrapper[5121]: I0126 00:16:57.992589 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfhxk"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008417 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-catalog-content\") pod \"c51b5df5-ef7d-4d88-b10c-1321140728e8\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008528 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-utilities\") pod \"3225226b-6f86-4163-b401-b9136c86dfed\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008562 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c75b2fc-a93e-44bd-9070-7512402f3f71-tmp\") pod \"4c75b2fc-a93e-44bd-9070-7512402f3f71\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008598 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-operator-metrics\") pod \"4c75b2fc-a93e-44bd-9070-7512402f3f71\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008677 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-trusted-ca\") pod \"4c75b2fc-a93e-44bd-9070-7512402f3f71\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008718 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-catalog-content\") pod \"3225226b-6f86-4163-b401-b9136c86dfed\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008788 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-utilities\") pod \"c51b5df5-ef7d-4d88-b10c-1321140728e8\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008817 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rhv9\" (UniqueName: \"kubernetes.io/projected/3225226b-6f86-4163-b401-b9136c86dfed-kube-api-access-6rhv9\") pod \"3225226b-6f86-4163-b401-b9136c86dfed\" (UID: \"3225226b-6f86-4163-b401-b9136c86dfed\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008847 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tff7q\" (UniqueName: \"kubernetes.io/projected/c51b5df5-ef7d-4d88-b10c-1321140728e8-kube-api-access-tff7q\") pod \"c51b5df5-ef7d-4d88-b10c-1321140728e8\" (UID: \"c51b5df5-ef7d-4d88-b10c-1321140728e8\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.008927 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz5f5\" (UniqueName: \"kubernetes.io/projected/4c75b2fc-a93e-44bd-9070-7512402f3f71-kube-api-access-wz5f5\") pod \"4c75b2fc-a93e-44bd-9070-7512402f3f71\" (UID: \"4c75b2fc-a93e-44bd-9070-7512402f3f71\") "
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.010230 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4c75b2fc-a93e-44bd-9070-7512402f3f71" (UID: "4c75b2fc-a93e-44bd-9070-7512402f3f71"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.011268 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-utilities" (OuterVolumeSpecName: "utilities") pod "3225226b-6f86-4163-b401-b9136c86dfed" (UID: "3225226b-6f86-4163-b401-b9136c86dfed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.011708 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-utilities" (OuterVolumeSpecName: "utilities") pod "c51b5df5-ef7d-4d88-b10c-1321140728e8" (UID: "c51b5df5-ef7d-4d88-b10c-1321140728e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.012333 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c75b2fc-a93e-44bd-9070-7512402f3f71-tmp" (OuterVolumeSpecName: "tmp") pod "4c75b2fc-a93e-44bd-9070-7512402f3f71" (UID: "4c75b2fc-a93e-44bd-9070-7512402f3f71"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.015679 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51b5df5-ef7d-4d88-b10c-1321140728e8-kube-api-access-tff7q" (OuterVolumeSpecName: "kube-api-access-tff7q") pod "c51b5df5-ef7d-4d88-b10c-1321140728e8" (UID: "c51b5df5-ef7d-4d88-b10c-1321140728e8"). InnerVolumeSpecName "kube-api-access-tff7q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.025152 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4c75b2fc-a93e-44bd-9070-7512402f3f71" (UID: "4c75b2fc-a93e-44bd-9070-7512402f3f71"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.032178 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3225226b-6f86-4163-b401-b9136c86dfed-kube-api-access-6rhv9" (OuterVolumeSpecName: "kube-api-access-6rhv9") pod "3225226b-6f86-4163-b401-b9136c86dfed" (UID: "3225226b-6f86-4163-b401-b9136c86dfed"). InnerVolumeSpecName "kube-api-access-6rhv9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.044498 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c75b2fc-a93e-44bd-9070-7512402f3f71-kube-api-access-wz5f5" (OuterVolumeSpecName: "kube-api-access-wz5f5") pod "4c75b2fc-a93e-44bd-9070-7512402f3f71" (UID: "4c75b2fc-a93e-44bd-9070-7512402f3f71"). InnerVolumeSpecName "kube-api-access-wz5f5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.049421 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3225226b-6f86-4163-b401-b9136c86dfed" (UID: "3225226b-6f86-4163-b401-b9136c86dfed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.079187 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c51b5df5-ef7d-4d88-b10c-1321140728e8" (UID: "c51b5df5-ef7d-4d88-b10c-1321140728e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.079279 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-th95t"]
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111437 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111484 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111498 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c75b2fc-a93e-44bd-9070-7512402f3f71-tmp\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111516 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111533 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c75b2fc-a93e-44bd-9070-7512402f3f71-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111547 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3225226b-6f86-4163-b401-b9136c86dfed-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111558 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c51b5df5-ef7d-4d88-b10c-1321140728e8-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111570 5121 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rhv9\" (UniqueName: \"kubernetes.io/projected/3225226b-6f86-4163-b401-b9136c86dfed-kube-api-access-6rhv9\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111582 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tff7q\" (UniqueName: \"kubernetes.io/projected/c51b5df5-ef7d-4d88-b10c-1321140728e8-kube-api-access-tff7q\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.111594 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wz5f5\" (UniqueName: \"kubernetes.io/projected/4c75b2fc-a93e-44bd-9070-7512402f3f71-kube-api-access-wz5f5\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.642691 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.642678 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-926kg" event={"ID":"4c75b2fc-a93e-44bd-9070-7512402f3f71","Type":"ContainerDied","Data":"26e3af89546a142e1d0ff614db98d012cdbd2f1bcd7dc317136a897852a1ff7e"} Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.644934 5121 generic.go:358] "Generic (PLEG): container finished" podID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerID="1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c" exitCode=0 Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.645031 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrfn9" event={"ID":"1a9d6686-1ae2-48c4-91f2-a41a12de699f","Type":"ContainerDied","Data":"1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c"} Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.647313 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7rfv" event={"ID":"3225226b-6f86-4163-b401-b9136c86dfed","Type":"ContainerDied","Data":"e580034c525da8cf3be61fdcaa055df120a5d569f4a200ec289e2c7fdbd9004d"} Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.647365 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7rfv" Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.650282 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfhxk" event={"ID":"c51b5df5-ef7d-4d88-b10c-1321140728e8","Type":"ContainerDied","Data":"5b89110e15cccd8ac628e5ffec8826f92945053c21ac8617ca3f3d479d51b659"} Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.650289 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfhxk" Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.653163 5121 util.go:48] "No ready sandbox for pod can be found. 
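Need to start a new one" pod="openshift-marketplace/community-operators-4p4cc"

The block above is the kubelet volume manager finishing teardown for the deleted marketplace pods. Each volume goes through the same three steps: the reconciler logs "operationExecutor.UnmountVolume started", the operation generator confirms "UnmountVolume.TearDown succeeded", and the volume is finally reported "Volume detached" with DevicePath "" (empty-dir, configmap, secret and projected volumes have no block device to release). Below is a minimal standalone sketch, assuming a journal excerpt saved to a file, that checks every started unmount eventually reached the detached state; the regexes match this log's escaped-quote format and are not kubelet code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        f, err := os.Open(os.Args[1]) // path to a saved journal excerpt
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        // The volume name appears between escaped quotes in the klog output.
        started := regexp.MustCompile(`UnmountVolume started for volume \\"([^"\\]+)\\"`)
        detached := regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\"`)

        pending := map[string]int{}
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be very long
        for sc.Scan() {
            if m := started.FindStringSubmatch(sc.Text()); m != nil {
                pending[m[1]]++
            }
            if m := detached.FindStringSubmatch(sc.Text()); m != nil {
                pending[m[1]]--
            }
        }
        for vol, n := range pending {
            if n > 0 {
                fmt.Printf("volume %q: unmount started but never detached\n", vol)
            }
        }
    }

Volume names such as catalog-content repeat across pods, so the net count is a rough consistency check rather than an exact per-pod pairing.
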
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.653234 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4p4cc" event={"ID":"395eb036-2c83-4393-b3a7-d6b872cf9e4b","Type":"ContainerDied","Data":"4a2247ecaaf87664a9bd1e8dcd913fc60f73cb13bd5fe86e48d1de509eb52d5b"}
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.654422 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t" event={"ID":"56a05c39-385c-44b8-be51-7d5c3df9540d","Type":"ContainerStarted","Data":"435b18733496e4f9672d8b6115c29e3115022259979a13376c4bf93f60c770cc"}
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.682017 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-926kg"]
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.687952 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-926kg"]
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.832866 5121 scope.go:117] "RemoveContainer" containerID="78b4143a32f5b93c26ca39b0bd6d67984d39a0daf6049d5d480d1e74718e6d2f"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.851709 5121 scope.go:117] "RemoveContainer" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12"
Jan 26 00:16:58 crc kubenswrapper[5121]: E0126 00:16:58.852532 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12\": container with ID starting with 03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12 not found: ID does not exist" containerID="03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.852575 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12"} err="failed to get container status \"03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12\": rpc error: code = NotFound desc = could not find container \"03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12\": container with ID starting with 03dfb1de99a4356b7d5c412d870aa3c262f1970ed44be4fb95d6c3cefef63c12 not found: ID does not exist"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.852600 5121 scope.go:117] "RemoveContainer" containerID="c6b713ecffbcaab906943b69d1b724f394716acc0d10430f82218e9045856910"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.870993 5121 scope.go:117] "RemoveContainer" containerID="54d2d87588d6bbe3dbfb327089ce9d4864fa7a3aba978ba6c260573368e5f588"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.889135 5121 scope.go:117] "RemoveContainer" containerID="c87f3dff942f3d615f2c61c40b29c39e40ea20bb9a0cb40f150dee26c4759ddc"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.918020 5121 scope.go:117] "RemoveContainer" containerID="13a729ed9fff5e59e0c299c0c951c6e5dd867f1911ec36cd186697f89201db42"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.950053 5121 scope.go:117] "RemoveContainer" containerID="a2bb7aa9fb375c1a28160fd94ddcd56bbb6dca83c9a3ea4f3648a5d7ecf90b93"
Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.967087 5121 scope.go:117] "RemoveContainer"
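containerID="3426ed528a702b5ef7b7100a4209abb4092c082cd45cefcecc0cde8db6221218"

The E0126 "ContainerStatus from runtime service failed" entry above is a benign race, not a real failure: two cleanup paths both asked CRI-O to delete container 03dfb1de99a4..., and the second caller found it already gone. The kubelet logs the NotFound and carries on. Below is a sketch of that idempotent-removal pattern; the remover interface and fakeRuntime are illustrative stand-ins, only the gRPC NotFound handling mirrors what the log shows (requires the google.golang.org/grpc module):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // remover is an illustrative stand-in for a CRI runtime client,
    // not the kubelet's real interface.
    type remover interface {
        RemoveContainer(id string) error
    }

    // removeIfPresent mirrors the tolerant behaviour visible above: a
    // NotFound from the runtime means another cleanup path already
    // deleted the container, so it is treated as success.
    func removeIfPresent(r remover, id string) error {
        if err := r.RemoveContainer(id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    type fakeRuntime struct{ gone map[string]bool }

    func (f *fakeRuntime) RemoveContainer(id string) error {
        if f.gone[id] {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        f.gone[id] = true
        return nil
    }

    func main() {
        rt := &fakeRuntime{gone: map[string]bool{}}
        fmt.Println(removeIfPresent(rt, "03dfb1de")) // <nil>: removed
        fmt.Println(removeIfPresent(rt, "03dfb1de")) // <nil>: already gone, tolerated
    }
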
containerID="3426ed528a702b5ef7b7100a4209abb4092c082cd45cefcecc0cde8db6221218" Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.985546 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfhxk"] Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.985649 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dfhxk"] Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.986395 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rfv"] Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.986426 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7rfv"] Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.986482 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4p4cc"] Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.986496 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4p4cc"] Jan 26 00:16:58 crc kubenswrapper[5121]: I0126 00:16:58.987413 5121 scope.go:117] "RemoveContainer" containerID="6b21b59ef6a6e8e4c7a42482ce554a89b8318b33f5193cc2956203480b870a18" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.007699 5121 scope.go:117] "RemoveContainer" containerID="8ea8c223a4290c8ed503e17f32140c615edd09215fbf77dc505f882c58fe44fd" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.045400 5121 scope.go:117] "RemoveContainer" containerID="892c425b55d5562b5dd6102cc77f438e4d0b4d15f1570fa4a50a20af338d56c3" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.371844 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrfn9" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.431425 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck79l\" (UniqueName: \"kubernetes.io/projected/1a9d6686-1ae2-48c4-91f2-a41a12de699f-kube-api-access-ck79l\") pod \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.431494 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-utilities\") pod \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.431534 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-catalog-content\") pod \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\" (UID: \"1a9d6686-1ae2-48c4-91f2-a41a12de699f\") " Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.433634 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-utilities" (OuterVolumeSpecName: "utilities") pod "1a9d6686-1ae2-48c4-91f2-a41a12de699f" (UID: "1a9d6686-1ae2-48c4-91f2-a41a12de699f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.438821 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9d6686-1ae2-48c4-91f2-a41a12de699f-kube-api-access-ck79l" (OuterVolumeSpecName: "kube-api-access-ck79l") pod "1a9d6686-1ae2-48c4-91f2-a41a12de699f" (UID: "1a9d6686-1ae2-48c4-91f2-a41a12de699f"). InnerVolumeSpecName "kube-api-access-ck79l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.532614 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ck79l\" (UniqueName: \"kubernetes.io/projected/1a9d6686-1ae2-48c4-91f2-a41a12de699f-kube-api-access-ck79l\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.532651 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.597266 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a9d6686-1ae2-48c4-91f2-a41a12de699f" (UID: "1a9d6686-1ae2-48c4-91f2-a41a12de699f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.634098 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9d6686-1ae2-48c4-91f2-a41a12de699f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.664575 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrfn9" event={"ID":"1a9d6686-1ae2-48c4-91f2-a41a12de699f","Type":"ContainerDied","Data":"8e0ba689fca9fae08775016845af9af4b3d02fa15d8f9f1673b93c743148093b"} Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.664634 5121 scope.go:117] "RemoveContainer" containerID="1a3648a51232d1138077d37446c5343add4e8cc6f8301d89ef49abd7236c186c" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.664680 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hrfn9" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.675140 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t" event={"ID":"56a05c39-385c-44b8-be51-7d5c3df9540d","Type":"ContainerStarted","Data":"61765ec42247b08b2e665f28da9d59ea16fbc0d9fc34827ded5529c9523c2857"} Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.694795 5121 scope.go:117] "RemoveContainer" containerID="28edef9c6f363addfa4e733c5ad8f047e7aef75f1c91570c5e55bdeb24251c19" Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.714550 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hrfn9"] Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.721135 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hrfn9"] Jan 26 00:16:59 crc kubenswrapper[5121]: I0126 00:16:59.725187 5121 scope.go:117] "RemoveContainer" containerID="87806c6f13a55b52d6912e779da4a98e0762c7fbf0dfb496f3311de82cdf4743" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.192243 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kq592"] Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193671 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193694 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193707 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193715 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193732 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193743 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193750 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3225226b-6f86-4163-b401-b9136c86dfed" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193781 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3225226b-6f86-4163-b401-b9136c86dfed" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193799 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193807 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193818 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3225226b-6f86-4163-b401-b9136c86dfed" 
containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193853 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3225226b-6f86-4163-b401-b9136c86dfed" containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193864 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193872 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193888 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193897 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193916 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193924 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193935 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193943 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193953 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193960 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193971 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193979 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193989 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.193997 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194006 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3225226b-6f86-4163-b401-b9136c86dfed" containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194014 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3225226b-6f86-4163-b401-b9136c86dfed" 
containerName="extract-utilities" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194024 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194032 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194042 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194063 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="extract-content" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194183 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194206 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3225226b-6f86-4163-b401-b9136c86dfed" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194218 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194228 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194238 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194253 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194265 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" containerName="registry-server" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194436 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194448 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194563 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.194581 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" containerName="marketplace-operator" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.405753 5121 util.go:30] "No sandbox for pod can be found. 
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.408637 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.418407 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9d6686-1ae2-48c4-91f2-a41a12de699f" path="/var/lib/kubelet/pods/1a9d6686-1ae2-48c4-91f2-a41a12de699f/volumes"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.422316 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3225226b-6f86-4163-b401-b9136c86dfed" path="/var/lib/kubelet/pods/3225226b-6f86-4163-b401-b9136c86dfed/volumes"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.423400 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395eb036-2c83-4393-b3a7-d6b872cf9e4b" path="/var/lib/kubelet/pods/395eb036-2c83-4393-b3a7-d6b872cf9e4b/volumes"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.424267 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c75b2fc-a93e-44bd-9070-7512402f3f71" path="/var/lib/kubelet/pods/4c75b2fc-a93e-44bd-9070-7512402f3f71/volumes"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.425354 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51b5df5-ef7d-4d88-b10c-1321140728e8" path="/var/lib/kubelet/pods/c51b5df5-ef7d-4d88-b10c-1321140728e8/volumes"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.426174 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kq592"]
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.445029 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlx5f\" (UniqueName: \"kubernetes.io/projected/35ee43d8-f119-418d-8f93-682a4ac716f4-kube-api-access-tlx5f\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.445140 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ee43d8-f119-418d-8f93-682a4ac716f4-catalog-content\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.445298 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ee43d8-f119-418d-8f93-682a4ac716f4-utilities\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.561442 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ee43d8-f119-418d-8f93-682a4ac716f4-catalog-content\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592"
Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.562089 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/35ee43d8-f119-418d-8f93-682a4ac716f4-utilities\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.562747 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tlx5f\" (UniqueName: \"kubernetes.io/projected/35ee43d8-f119-418d-8f93-682a4ac716f4-kube-api-access-tlx5f\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.563015 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ee43d8-f119-418d-8f93-682a4ac716f4-catalog-content\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.565430 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ee43d8-f119-418d-8f93-682a4ac716f4-utilities\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.592683 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlx5f\" (UniqueName: \"kubernetes.io/projected/35ee43d8-f119-418d-8f93-682a4ac716f4-kube-api-access-tlx5f\") pod \"certified-operators-kq592\" (UID: \"35ee43d8-f119-418d-8f93-682a4ac716f4\") " pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.684635 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.690586 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.727937 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-th95t" podStartSLOduration=4.727890118 podStartE2EDuration="4.727890118s" podCreationTimestamp="2026-01-26 00:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:17:00.704929175 +0000 UTC m=+451.864130300" watchObservedRunningTime="2026-01-26 00:17:00.727890118 +0000 UTC m=+451.887091243" Jan 26 00:17:00 crc kubenswrapper[5121]: I0126 00:17:00.736634 5121 util.go:30] "No sandbox for pod can be found. 
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.190350 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2txw7"]
Jan 26 00:17:01 crc kubenswrapper[5121]: W0126 00:17:01.214131 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ee43d8_f119_418d_8f93_682a4ac716f4.slice/crio-fa7af4d04d08829b987c12b88ab2ccc83f772492750f6df523d48751495cbc05 WatchSource:0}: Error finding container fa7af4d04d08829b987c12b88ab2ccc83f772492750f6df523d48751495cbc05: Status 404 returned error can't find the container with id fa7af4d04d08829b987c12b88ab2ccc83f772492750f6df523d48751495cbc05
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.605359 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kq592"]
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.605544 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2txw7"
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.607437 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2txw7"]
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.608848 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.679339 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-catalog-content\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7"
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.679804 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-utilities\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7"
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.680038 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcw9\" (UniqueName: \"kubernetes.io/projected/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-kube-api-access-czcw9\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7"
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.692017 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq592" event={"ID":"35ee43d8-f119-418d-8f93-682a4ac716f4","Type":"ContainerStarted","Data":"fa7af4d04d08829b987c12b88ab2ccc83f772492750f6df523d48751495cbc05"}
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.781705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-catalog-content\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7"
Jan 26 00:17:01 crc kubenswrapper[5121]: I0126
00:17:01.781988 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-utilities\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.782128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-czcw9\" (UniqueName: \"kubernetes.io/projected/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-kube-api-access-czcw9\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.782253 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-catalog-content\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.782823 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-utilities\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.802575 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.802663 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.803851 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-czcw9\" (UniqueName: \"kubernetes.io/projected/e2dce66a-3bc6-4888-b054-5d06e1c1bef0-kube-api-access-czcw9\") pod \"community-operators-2txw7\" (UID: \"e2dce66a-3bc6-4888-b054-5d06e1c1bef0\") " pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:01 crc kubenswrapper[5121]: I0126 00:17:01.927238 5121 util.go:30] "No sandbox for pod can be found. 
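Need to start a new one" pod="openshift-marketplace/community-operators-2txw7"

The machine-config-daemon liveness failure above is a plain connection refusal: nothing was listening on 127.0.0.1:8798 when the prober dialed, and one miss only increments the failure counter; the container restarts only after failureThreshold consecutive misses. Below is a rough approximation of the kubelet's HTTP check (the real prober also sets probe-specific headers and takes its timeout and thresholds from the pod spec):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe approximates the kubelet's HTTP check: any response with a
    // status in [200,400) is success; a transport error (such as the
    // "connection refused" above) or any other status is a failure.
    func probe(url string) string {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return "failure: " + err.Error()
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success"
        }
        return fmt.Sprintf("failure: status %d", resp.StatusCode)
    }

    func main() {
        fmt.Println(probe("http://127.0.0.1:8798/health"))
    }
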
Jan 26 00:17:02 crc kubenswrapper[5121]: I0126 00:17:02.388342 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2txw7"]
Jan 26 00:17:02 crc kubenswrapper[5121]: I0126 00:17:02.592092 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9gd98"]
Jan 26 00:17:02 crc kubenswrapper[5121]: I0126 00:17:02.704172 5121 generic.go:358] "Generic (PLEG): container finished" podID="35ee43d8-f119-418d-8f93-682a4ac716f4" containerID="136ad2db187b0002899a223f658f60631449796c9175339afe0274ce7bcdbb31" exitCode=0
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.338362 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gd98"]
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.338409 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-t2xjv"]
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.341048 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gd98"
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.343776 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.414870 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-utilities\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98"
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.415024 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfkx\" (UniqueName: \"kubernetes.io/projected/242b1a88-f692-4c26-96bc-ee700a89fd4c-kube-api-access-2cfkx\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98"
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.415121 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-catalog-content\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98"
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.516885 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-catalog-content\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98"
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.516972 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-utilities\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98"
Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.517013 5121 reconciler_common.go:224]
"operationExecutor.MountVolume started for volume \"kube-api-access-2cfkx\" (UniqueName: \"kubernetes.io/projected/242b1a88-f692-4c26-96bc-ee700a89fd4c-kube-api-access-2cfkx\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.517339 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-catalog-content\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.517796 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-utilities\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.549965 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfkx\" (UniqueName: \"kubernetes.io/projected/242b1a88-f692-4c26-96bc-ee700a89fd4c-kube-api-access-2cfkx\") pod \"redhat-marketplace-9gd98\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:03 crc kubenswrapper[5121]: I0126 00:17:03.674458 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.204990 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq592" event={"ID":"35ee43d8-f119-418d-8f93-682a4ac716f4","Type":"ContainerDied","Data":"136ad2db187b0002899a223f658f60631449796c9175339afe0274ce7bcdbb31"} Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.205370 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-t2xjv"] Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.205394 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2txw7" event={"ID":"e2dce66a-3bc6-4888-b054-5d06e1c1bef0","Type":"ContainerStarted","Data":"e7c1f151310dcd5bda51617aea66dd47c61c2b7d2ea692a8cd4abd4d093b4aa8"} Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.205421 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7zv49"] Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.205159 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329101 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329187 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzns\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-kube-api-access-sgzns\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329244 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-registry-tls\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329306 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-bound-sa-token\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329351 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f8e7115-5275-411b-b351-2f26a81c330e-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329426 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f8e7115-5275-411b-b351-2f26a81c330e-registry-certificates\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329453 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f8e7115-5275-411b-b351-2f26a81c330e-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.329481 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f8e7115-5275-411b-b351-2f26a81c330e-trusted-ca\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.355494 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.430872 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgzns\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-kube-api-access-sgzns\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.430962 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-registry-tls\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.431000 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-bound-sa-token\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.431026 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f8e7115-5275-411b-b351-2f26a81c330e-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.431062 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f8e7115-5275-411b-b351-2f26a81c330e-registry-certificates\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.431463 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f8e7115-5275-411b-b351-2f26a81c330e-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.431501 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f8e7115-5275-411b-b351-2f26a81c330e-trusted-ca\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.431983 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f8e7115-5275-411b-b351-2f26a81c330e-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.432971 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f8e7115-5275-411b-b351-2f26a81c330e-registry-certificates\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.433022 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f8e7115-5275-411b-b351-2f26a81c330e-trusted-ca\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.445082 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-registry-tls\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.446018 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f8e7115-5275-411b-b351-2f26a81c330e-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.451149 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-bound-sa-token\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.452938 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgzns\" (UniqueName: \"kubernetes.io/projected/0f8e7115-5275-411b-b351-2f26a81c330e-kube-api-access-sgzns\") pod \"image-registry-5d9d95bf5b-t2xjv\" (UID: \"0f8e7115-5275-411b-b351-2f26a81c330e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: W0126 00:17:04.458198 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod242b1a88_f692_4c26_96bc_ee700a89fd4c.slice/crio-2d12ebc0ce05a4d7bc048b7115e4c1fac791a05a465c24b290519a43663cb448 WatchSource:0}: Error finding container 2d12ebc0ce05a4d7bc048b7115e4c1fac791a05a465c24b290519a43663cb448: Status 404 returned error can't find the container with id 2d12ebc0ce05a4d7bc048b7115e4c1fac791a05a465c24b290519a43663cb448 Jan 26 00:17:04 crc kubenswrapper[5121]: I0126 00:17:04.524645 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:04 crc kubenswrapper[5121]: W0126 00:17:04.971670 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f8e7115_5275_411b_b351_2f26a81c330e.slice/crio-c1bb01e0762d77f067e909b73b8f6fbb4fcfa9f23ef5351ad944761e22ee6931 WatchSource:0}: Error finding container c1bb01e0762d77f067e909b73b8f6fbb4fcfa9f23ef5351ad944761e22ee6931: Status 404 returned error can't find the container with id c1bb01e0762d77f067e909b73b8f6fbb4fcfa9f23ef5351ad944761e22ee6931 Jan 26 00:17:07 crc kubenswrapper[5121]: I0126 00:17:07.759664 5121 generic.go:358] "Generic (PLEG): container finished" podID="e2dce66a-3bc6-4888-b054-5d06e1c1bef0" containerID="cea74d0b8da6a42dafc869f58bc2a8d959d6745adcc42ec34a861f49f9a586bc" exitCode=0 Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.981199 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zv49"] Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.981588 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.981651 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gd98"] Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.981725 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-t2xjv"] Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.981964 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gd98" event={"ID":"242b1a88-f692-4c26-96bc-ee700a89fd4c","Type":"ContainerStarted","Data":"2d12ebc0ce05a4d7bc048b7115e4c1fac791a05a465c24b290519a43663cb448"} Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.985723 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.995057 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" event={"ID":"0f8e7115-5275-411b-b351-2f26a81c330e","Type":"ContainerStarted","Data":"c1bb01e0762d77f067e909b73b8f6fbb4fcfa9f23ef5351ad944761e22ee6931"} Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.995158 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2txw7" event={"ID":"e2dce66a-3bc6-4888-b054-5d06e1c1bef0","Type":"ContainerStarted","Data":"cea74d0b8da6a42dafc869f58bc2a8d959d6745adcc42ec34a861f49f9a586bc"} Jan 26 00:17:09 crc kubenswrapper[5121]: I0126 00:17:09.995180 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2txw7" event={"ID":"e2dce66a-3bc6-4888-b054-5d06e1c1bef0","Type":"ContainerDied","Data":"cea74d0b8da6a42dafc869f58bc2a8d959d6745adcc42ec34a861f49f9a586bc"} Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.136205 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5d8\" (UniqueName: \"kubernetes.io/projected/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-kube-api-access-bq5d8\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc 
kubenswrapper[5121]: I0126 00:17:10.136394 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-catalog-content\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.136454 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-utilities\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.237702 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-catalog-content\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.237834 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-utilities\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.237891 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bq5d8\" (UniqueName: \"kubernetes.io/projected/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-kube-api-access-bq5d8\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.238505 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-catalog-content\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.238888 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-utilities\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.269508 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5d8\" (UniqueName: \"kubernetes.io/projected/eb37ecc8-c468-4f1d-88b6-3b1fa517ed70-kube-api-access-bq5d8\") pod \"redhat-operators-7zv49\" (UID: \"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70\") " pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.319325 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.788165 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zv49"] Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.790346 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" event={"ID":"0f8e7115-5275-411b-b351-2f26a81c330e","Type":"ContainerStarted","Data":"19caf3186c15ba6e4faf6272e1e01c5f70e9e67d27a3fb20c31e679293e9b83f"} Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.790463 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.792572 5121 generic.go:358] "Generic (PLEG): container finished" podID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerID="a1d42c5f0c55ab823f023b6c31b7379d0812ccb9c6c3d89d7b8f48339927e116" exitCode=0 Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.792666 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gd98" event={"ID":"242b1a88-f692-4c26-96bc-ee700a89fd4c","Type":"ContainerDied","Data":"a1d42c5f0c55ab823f023b6c31b7379d0812ccb9c6c3d89d7b8f48339927e116"} Jan 26 00:17:10 crc kubenswrapper[5121]: W0126 00:17:10.798799 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb37ecc8_c468_4f1d_88b6_3b1fa517ed70.slice/crio-8509691ef9669202ad960b9d87ced415b7ca3d926ad2e4cc6038be8a10c94c16 WatchSource:0}: Error finding container 8509691ef9669202ad960b9d87ced415b7ca3d926ad2e4cc6038be8a10c94c16: Status 404 returned error can't find the container with id 8509691ef9669202ad960b9d87ced415b7ca3d926ad2e4cc6038be8a10c94c16 Jan 26 00:17:10 crc kubenswrapper[5121]: I0126 00:17:10.838263 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" podStartSLOduration=8.838226645 podStartE2EDuration="8.838226645s" podCreationTimestamp="2026-01-26 00:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:17:10.829641125 +0000 UTC m=+461.988842250" watchObservedRunningTime="2026-01-26 00:17:10.838226645 +0000 UTC m=+461.997427790" Jan 26 00:17:11 crc kubenswrapper[5121]: I0126 00:17:11.803178 5121 generic.go:358] "Generic (PLEG): container finished" podID="35ee43d8-f119-418d-8f93-682a4ac716f4" containerID="aa66a306d3f2a43b891f740282ad8ffec51e85e5349ee659c93b1931b4776764" exitCode=0 Jan 26 00:17:11 crc kubenswrapper[5121]: I0126 00:17:11.803444 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq592" event={"ID":"35ee43d8-f119-418d-8f93-682a4ac716f4","Type":"ContainerDied","Data":"aa66a306d3f2a43b891f740282ad8ffec51e85e5349ee659c93b1931b4776764"} Jan 26 00:17:11 crc kubenswrapper[5121]: I0126 00:17:11.809259 5121 generic.go:358] "Generic (PLEG): container finished" podID="eb37ecc8-c468-4f1d-88b6-3b1fa517ed70" containerID="21a1b1034e9b6be4d25f6a4bc9fd583031484c0a728a27212e30fb024c25d139" exitCode=0 Jan 26 00:17:11 crc kubenswrapper[5121]: I0126 00:17:11.809362 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zv49" 
event={"ID":"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70","Type":"ContainerDied","Data":"21a1b1034e9b6be4d25f6a4bc9fd583031484c0a728a27212e30fb024c25d139"} Jan 26 00:17:11 crc kubenswrapper[5121]: I0126 00:17:11.809505 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zv49" event={"ID":"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70","Type":"ContainerStarted","Data":"8509691ef9669202ad960b9d87ced415b7ca3d926ad2e4cc6038be8a10c94c16"} Jan 26 00:17:12 crc kubenswrapper[5121]: I0126 00:17:12.818173 5121 generic.go:358] "Generic (PLEG): container finished" podID="e2dce66a-3bc6-4888-b054-5d06e1c1bef0" containerID="a25921ccfa679243e651ba6f034e3bde39ddccafdf56255f990fde61a8ea01ff" exitCode=0 Jan 26 00:17:12 crc kubenswrapper[5121]: I0126 00:17:12.818280 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2txw7" event={"ID":"e2dce66a-3bc6-4888-b054-5d06e1c1bef0","Type":"ContainerDied","Data":"a25921ccfa679243e651ba6f034e3bde39ddccafdf56255f990fde61a8ea01ff"} Jan 26 00:17:12 crc kubenswrapper[5121]: I0126 00:17:12.823791 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kq592" event={"ID":"35ee43d8-f119-418d-8f93-682a4ac716f4","Type":"ContainerStarted","Data":"3a84abf84e57a6f7014ab3d513c3c3494c921665229b589305adddc199d05f6b"} Jan 26 00:17:12 crc kubenswrapper[5121]: I0126 00:17:12.827288 5121 generic.go:358] "Generic (PLEG): container finished" podID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerID="2b70057252c7c3ee99260923d0180f2152bb0c14b5f286db03d9161c652e04f1" exitCode=0 Jan 26 00:17:12 crc kubenswrapper[5121]: I0126 00:17:12.827626 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gd98" event={"ID":"242b1a88-f692-4c26-96bc-ee700a89fd4c","Type":"ContainerDied","Data":"2b70057252c7c3ee99260923d0180f2152bb0c14b5f286db03d9161c652e04f1"} Jan 26 00:17:12 crc kubenswrapper[5121]: I0126 00:17:12.831832 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zv49" event={"ID":"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70","Type":"ContainerStarted","Data":"508d61dc725f2f48fe6b9105b6a6f1f5b4275adaec90a3aec65cbff868ae9f66"} Jan 26 00:17:12 crc kubenswrapper[5121]: I0126 00:17:12.989579 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kq592" podStartSLOduration=5.492319733 podStartE2EDuration="12.989496995s" podCreationTimestamp="2026-01-26 00:17:00 +0000 UTC" firstStartedPulling="2026-01-26 00:17:03.339336313 +0000 UTC m=+454.498537438" lastFinishedPulling="2026-01-26 00:17:10.836513575 +0000 UTC m=+461.995714700" observedRunningTime="2026-01-26 00:17:12.957152021 +0000 UTC m=+464.116353146" watchObservedRunningTime="2026-01-26 00:17:12.989496995 +0000 UTC m=+464.148698120" Jan 26 00:17:13 crc kubenswrapper[5121]: I0126 00:17:13.840614 5121 generic.go:358] "Generic (PLEG): container finished" podID="eb37ecc8-c468-4f1d-88b6-3b1fa517ed70" containerID="508d61dc725f2f48fe6b9105b6a6f1f5b4275adaec90a3aec65cbff868ae9f66" exitCode=0 Jan 26 00:17:13 crc kubenswrapper[5121]: I0126 00:17:13.841808 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zv49" event={"ID":"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70","Type":"ContainerDied","Data":"508d61dc725f2f48fe6b9105b6a6f1f5b4275adaec90a3aec65cbff868ae9f66"} Jan 26 00:17:14 crc kubenswrapper[5121]: I0126 00:17:14.874650 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2txw7" event={"ID":"e2dce66a-3bc6-4888-b054-5d06e1c1bef0","Type":"ContainerStarted","Data":"e89a0f032f941a70e15d3234a7545843b6bdb8ac986d353a020debd8d4f65480"} Jan 26 00:17:14 crc kubenswrapper[5121]: I0126 00:17:14.878544 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gd98" event={"ID":"242b1a88-f692-4c26-96bc-ee700a89fd4c","Type":"ContainerStarted","Data":"ef653cc8d3e3dc8a5f00cf6a0348088213a80f9a2100d96686c3fcaa73e6a7f6"} Jan 26 00:17:14 crc kubenswrapper[5121]: I0126 00:17:14.881664 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zv49" event={"ID":"eb37ecc8-c468-4f1d-88b6-3b1fa517ed70","Type":"ContainerStarted","Data":"ff77bde5aa0393685f6cf33801cc561d446b6528d0c534d99b0b4a6c725f32b9"} Jan 26 00:17:14 crc kubenswrapper[5121]: I0126 00:17:14.904428 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2txw7" podStartSLOduration=12.756607889 podStartE2EDuration="13.904400909s" podCreationTimestamp="2026-01-26 00:17:01 +0000 UTC" firstStartedPulling="2026-01-26 00:17:10.793569742 +0000 UTC m=+461.952770867" lastFinishedPulling="2026-01-26 00:17:11.941362752 +0000 UTC m=+463.100563887" observedRunningTime="2026-01-26 00:17:14.901751291 +0000 UTC m=+466.060952416" watchObservedRunningTime="2026-01-26 00:17:14.904400909 +0000 UTC m=+466.063602034" Jan 26 00:17:14 crc kubenswrapper[5121]: I0126 00:17:14.930214 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9gd98" podStartSLOduration=11.853274289 podStartE2EDuration="12.930192981s" podCreationTimestamp="2026-01-26 00:17:02 +0000 UTC" firstStartedPulling="2026-01-26 00:17:10.793886462 +0000 UTC m=+461.953087587" lastFinishedPulling="2026-01-26 00:17:11.870805154 +0000 UTC m=+463.030006279" observedRunningTime="2026-01-26 00:17:14.929906563 +0000 UTC m=+466.089107718" watchObservedRunningTime="2026-01-26 00:17:14.930192981 +0000 UTC m=+466.089394106" Jan 26 00:17:14 crc kubenswrapper[5121]: I0126 00:17:14.957147 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7zv49" podStartSLOduration=11.38607297 podStartE2EDuration="11.957108266s" podCreationTimestamp="2026-01-26 00:17:03 +0000 UTC" firstStartedPulling="2026-01-26 00:17:11.810533036 +0000 UTC m=+462.969734161" lastFinishedPulling="2026-01-26 00:17:12.381568322 +0000 UTC m=+463.540769457" observedRunningTime="2026-01-26 00:17:14.952695977 +0000 UTC m=+466.111897112" watchObservedRunningTime="2026-01-26 00:17:14.957108266 +0000 UTC m=+466.116309391" Jan 26 00:17:20 crc kubenswrapper[5121]: I0126 00:17:20.717263 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:20 crc kubenswrapper[5121]: I0126 00:17:20.718053 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:20 crc kubenswrapper[5121]: I0126 00:17:20.736986 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:20 crc kubenswrapper[5121]: I0126 00:17:20.737039 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:20 crc kubenswrapper[5121]: I0126 00:17:20.756576 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:20 crc kubenswrapper[5121]: I0126 00:17:20.789882 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:21 crc kubenswrapper[5121]: I0126 00:17:21.000633 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7zv49" Jan 26 00:17:21 crc kubenswrapper[5121]: I0126 00:17:21.016570 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kq592" Jan 26 00:17:21 crc kubenswrapper[5121]: I0126 00:17:21.927752 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:21 crc kubenswrapper[5121]: I0126 00:17:21.928193 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:21 crc kubenswrapper[5121]: I0126 00:17:21.999181 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:22 crc kubenswrapper[5121]: I0126 00:17:22.046618 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2txw7" Jan 26 00:17:23 crc kubenswrapper[5121]: I0126 00:17:23.674782 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:23 crc kubenswrapper[5121]: I0126 00:17:23.675860 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:23 crc kubenswrapper[5121]: I0126 00:17:23.730897 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:24 crc kubenswrapper[5121]: I0126 00:17:24.038880 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:17:31 crc kubenswrapper[5121]: I0126 00:17:31.802068 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:17:31 crc kubenswrapper[5121]: I0126 00:17:31.802880 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:17:31 crc kubenswrapper[5121]: I0126 00:17:31.817860 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-t2xjv" Jan 26 00:17:31 crc kubenswrapper[5121]: I0126 00:17:31.873322 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-c2pks"] Jan 26 00:17:56 crc kubenswrapper[5121]: I0126 00:17:56.940846 5121 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" podUID="377fc649-7ccb-4b5e-a98c-f217298fd396" containerName="registry" containerID="cri-o://58139bbe20f1b373058e8f021f501637c2f0d2265da0c468bea4685257742841" gracePeriod=30 Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.226269 5121 generic.go:358] "Generic (PLEG): container finished" podID="377fc649-7ccb-4b5e-a98c-f217298fd396" containerID="58139bbe20f1b373058e8f021f501637c2f0d2265da0c468bea4685257742841" exitCode=0 Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.226412 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" event={"ID":"377fc649-7ccb-4b5e-a98c-f217298fd396","Type":"ContainerDied","Data":"58139bbe20f1b373058e8f021f501637c2f0d2265da0c468bea4685257742841"} Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.426359 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.561609 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tthvx\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-kube-api-access-tthvx\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.561707 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-certificates\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.561727 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-bound-sa-token\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.561758 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-tls\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.563678 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.561884 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/377fc649-7ccb-4b5e-a98c-f217298fd396-installation-pull-secrets\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.564193 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/377fc649-7ccb-4b5e-a98c-f217298fd396-ca-trust-extracted\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.564252 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-trusted-ca\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.564480 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"377fc649-7ccb-4b5e-a98c-f217298fd396\" (UID: \"377fc649-7ccb-4b5e-a98c-f217298fd396\") " Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.564725 5121 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.565414 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.571606 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.571708 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377fc649-7ccb-4b5e-a98c-f217298fd396-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.572532 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.579114 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-kube-api-access-tthvx" (OuterVolumeSpecName: "kube-api-access-tthvx") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "kube-api-access-tthvx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.582184 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.582501 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/377fc649-7ccb-4b5e-a98c-f217298fd396-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "377fc649-7ccb-4b5e-a98c-f217298fd396" (UID: "377fc649-7ccb-4b5e-a98c-f217298fd396"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.666113 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/377fc649-7ccb-4b5e-a98c-f217298fd396-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.666191 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tthvx\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-kube-api-access-tthvx\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.666218 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.666237 5121 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/377fc649-7ccb-4b5e-a98c-f217298fd396-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.666258 5121 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/377fc649-7ccb-4b5e-a98c-f217298fd396-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:57 crc kubenswrapper[5121]: I0126 00:17:57.666317 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/377fc649-7ccb-4b5e-a98c-f217298fd396-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:17:58 crc kubenswrapper[5121]: I0126 00:17:58.233406 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" event={"ID":"377fc649-7ccb-4b5e-a98c-f217298fd396","Type":"ContainerDied","Data":"85adef2e4935b11730e3b5850afd882b0498ede0f3a85363bbc8983828b06714"} Jan 26 00:17:58 crc kubenswrapper[5121]: I0126 00:17:58.233477 5121 scope.go:117] "RemoveContainer" 
containerID="58139bbe20f1b373058e8f021f501637c2f0d2265da0c468bea4685257742841" Jan 26 00:17:58 crc kubenswrapper[5121]: I0126 00:17:58.233476 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-c2pks" Jan 26 00:17:58 crc kubenswrapper[5121]: I0126 00:17:58.282358 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-c2pks"] Jan 26 00:17:58 crc kubenswrapper[5121]: I0126 00:17:58.289016 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-c2pks"] Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.151010 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489778-fmfw6"] Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.152336 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="377fc649-7ccb-4b5e-a98c-f217298fd396" containerName="registry" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.152360 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="377fc649-7ccb-4b5e-a98c-f217298fd396" containerName="registry" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.152525 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="377fc649-7ccb-4b5e-a98c-f217298fd396" containerName="registry" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.165636 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-fmfw6" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.168699 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.169205 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.169542 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.172565 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-fmfw6"] Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.262985 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377fc649-7ccb-4b5e-a98c-f217298fd396" path="/var/lib/kubelet/pods/377fc649-7ccb-4b5e-a98c-f217298fd396/volumes" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.306103 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8pq\" (UniqueName: \"kubernetes.io/projected/cf51cfa0-3712-4ba5-9394-eb2d0af087b9-kube-api-access-qz8pq\") pod \"auto-csr-approver-29489778-fmfw6\" (UID: \"cf51cfa0-3712-4ba5-9394-eb2d0af087b9\") " pod="openshift-infra/auto-csr-approver-29489778-fmfw6" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.407993 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qz8pq\" (UniqueName: \"kubernetes.io/projected/cf51cfa0-3712-4ba5-9394-eb2d0af087b9-kube-api-access-qz8pq\") pod \"auto-csr-approver-29489778-fmfw6\" (UID: \"cf51cfa0-3712-4ba5-9394-eb2d0af087b9\") " pod="openshift-infra/auto-csr-approver-29489778-fmfw6" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.431308 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz8pq\" (UniqueName: \"kubernetes.io/projected/cf51cfa0-3712-4ba5-9394-eb2d0af087b9-kube-api-access-qz8pq\") pod \"auto-csr-approver-29489778-fmfw6\" (UID: \"cf51cfa0-3712-4ba5-9394-eb2d0af087b9\") " pod="openshift-infra/auto-csr-approver-29489778-fmfw6" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.486729 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-fmfw6" Jan 26 00:18:00 crc kubenswrapper[5121]: I0126 00:18:00.765269 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-fmfw6"] Jan 26 00:18:01 crc kubenswrapper[5121]: I0126 00:18:01.255035 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-fmfw6" event={"ID":"cf51cfa0-3712-4ba5-9394-eb2d0af087b9","Type":"ContainerStarted","Data":"9c094620f6310fa890c21a5d445c62444562f6710f042f7212265c70a3a7d358"} Jan 26 00:18:01 crc kubenswrapper[5121]: I0126 00:18:01.802278 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:18:01 crc kubenswrapper[5121]: I0126 00:18:01.803382 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:18:01 crc kubenswrapper[5121]: I0126 00:18:01.803535 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:18:01 crc kubenswrapper[5121]: I0126 00:18:01.804633 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f39a6654dbb57a32f87d3c6d5c0d5216f516cfa8d25596ac86a0268ff2b003c6"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:18:01 crc kubenswrapper[5121]: I0126 00:18:01.804949 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" containerID="cri-o://f39a6654dbb57a32f87d3c6d5c0d5216f516cfa8d25596ac86a0268ff2b003c6" gracePeriod=600 Jan 26 00:18:02 crc kubenswrapper[5121]: I0126 00:18:02.265792 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="f39a6654dbb57a32f87d3c6d5c0d5216f516cfa8d25596ac86a0268ff2b003c6" exitCode=0 Jan 26 00:18:02 crc kubenswrapper[5121]: I0126 00:18:02.266322 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"f39a6654dbb57a32f87d3c6d5c0d5216f516cfa8d25596ac86a0268ff2b003c6"} Jan 26 00:18:02 crc kubenswrapper[5121]: I0126 00:18:02.266361 5121 scope.go:117] "RemoveContainer" 
containerID="121715febe285b0cd53762d792b1e46046f0843af04ecfb809633b61a008898d" Jan 26 00:18:03 crc kubenswrapper[5121]: I0126 00:18:03.276747 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"32fd100fce0d17b0cb1b0932a20894d5150463e8a26ab26138c2ddecc38ffec5"} Jan 26 00:18:07 crc kubenswrapper[5121]: I0126 00:18:07.167409 5121 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-cwbx9" Jan 26 00:18:07 crc kubenswrapper[5121]: I0126 00:18:07.192822 5121 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-cwbx9" Jan 26 00:18:07 crc kubenswrapper[5121]: I0126 00:18:07.330680 5121 generic.go:358] "Generic (PLEG): container finished" podID="cf51cfa0-3712-4ba5-9394-eb2d0af087b9" containerID="080396410d3f9f1b10f9edd791b7580db8f0ce2ff8a0172f6d315d0997af7a4f" exitCode=0 Jan 26 00:18:07 crc kubenswrapper[5121]: I0126 00:18:07.330847 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-fmfw6" event={"ID":"cf51cfa0-3712-4ba5-9394-eb2d0af087b9","Type":"ContainerDied","Data":"080396410d3f9f1b10f9edd791b7580db8f0ce2ff8a0172f6d315d0997af7a4f"} Jan 26 00:18:08 crc kubenswrapper[5121]: I0126 00:18:08.195296 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-25 00:13:07 +0000 UTC" deadline="2026-02-17 11:10:34.03610174 +0000 UTC" Jan 26 00:18:08 crc kubenswrapper[5121]: I0126 00:18:08.195738 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="538h52m25.8403684s" Jan 26 00:18:08 crc kubenswrapper[5121]: I0126 00:18:08.699132 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-fmfw6" Jan 26 00:18:08 crc kubenswrapper[5121]: I0126 00:18:08.798051 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz8pq\" (UniqueName: \"kubernetes.io/projected/cf51cfa0-3712-4ba5-9394-eb2d0af087b9-kube-api-access-qz8pq\") pod \"cf51cfa0-3712-4ba5-9394-eb2d0af087b9\" (UID: \"cf51cfa0-3712-4ba5-9394-eb2d0af087b9\") " Jan 26 00:18:08 crc kubenswrapper[5121]: I0126 00:18:08.806913 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf51cfa0-3712-4ba5-9394-eb2d0af087b9-kube-api-access-qz8pq" (OuterVolumeSpecName: "kube-api-access-qz8pq") pod "cf51cfa0-3712-4ba5-9394-eb2d0af087b9" (UID: "cf51cfa0-3712-4ba5-9394-eb2d0af087b9"). InnerVolumeSpecName "kube-api-access-qz8pq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:18:08 crc kubenswrapper[5121]: I0126 00:18:08.900063 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qz8pq\" (UniqueName: \"kubernetes.io/projected/cf51cfa0-3712-4ba5-9394-eb2d0af087b9-kube-api-access-qz8pq\") on node \"crc\" DevicePath \"\"" Jan 26 00:18:09 crc kubenswrapper[5121]: I0126 00:18:09.197137 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-25 00:13:07 +0000 UTC" deadline="2026-02-19 09:39:20.144506498 +0000 UTC" Jan 26 00:18:09 crc kubenswrapper[5121]: I0126 00:18:09.197218 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="585h21m10.947305267s" Jan 26 00:18:09 crc kubenswrapper[5121]: I0126 00:18:09.346440 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489778-fmfw6" event={"ID":"cf51cfa0-3712-4ba5-9394-eb2d0af087b9","Type":"ContainerDied","Data":"9c094620f6310fa890c21a5d445c62444562f6710f042f7212265c70a3a7d358"} Jan 26 00:18:09 crc kubenswrapper[5121]: I0126 00:18:09.346495 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489778-fmfw6" Jan 26 00:18:09 crc kubenswrapper[5121]: I0126 00:18:09.346498 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c094620f6310fa890c21a5d445c62444562f6710f042f7212265c70a3a7d358" Jan 26 00:18:30 crc kubenswrapper[5121]: I0126 00:18:30.541682 5121 scope.go:117] "RemoveContainer" containerID="2e6e9a2057090b684b4d29ad44490a73a04d9bf56c9140f768603106cd0c626a" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.636563 5121 scope.go:117] "RemoveContainer" containerID="7db56819b61aa14d090680629effc116c63fb84cb9c1e3c6d7996857393e06cc" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.678843 5121 scope.go:117] "RemoveContainer" containerID="bca443ca007878020ad10ee6523bc5b982f0b94e3f17bf9630034efb5c9887da" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.692952 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.695309 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.711020 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.712676 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.714868 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:19:30 crc kubenswrapper[5121]: I0126 00:19:30.715750 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:20:00 crc kubenswrapper[5121]: I0126 00:20:00.136646 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489780-fdl6m"] Jan 26 00:20:00 crc kubenswrapper[5121]: I0126 00:20:00.138520 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf51cfa0-3712-4ba5-9394-eb2d0af087b9" containerName="oc" Jan 26 00:20:00 crc kubenswrapper[5121]: I0126 00:20:00.138543 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf51cfa0-3712-4ba5-9394-eb2d0af087b9" containerName="oc" Jan 26 00:20:00 crc kubenswrapper[5121]: I0126 00:20:00.138659 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf51cfa0-3712-4ba5-9394-eb2d0af087b9" containerName="oc" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.423163 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-fdl6m" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.427293 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.428657 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.429783 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.432941 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-fdl6m"] Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.459217 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsn5m\" (UniqueName: \"kubernetes.io/projected/c993c292-35cf-45f2-8be9-beb81e25a150-kube-api-access-gsn5m\") pod \"auto-csr-approver-29489780-fdl6m\" (UID: \"c993c292-35cf-45f2-8be9-beb81e25a150\") " pod="openshift-infra/auto-csr-approver-29489780-fdl6m" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.560775 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gsn5m\" (UniqueName: \"kubernetes.io/projected/c993c292-35cf-45f2-8be9-beb81e25a150-kube-api-access-gsn5m\") pod \"auto-csr-approver-29489780-fdl6m\" (UID: \"c993c292-35cf-45f2-8be9-beb81e25a150\") " pod="openshift-infra/auto-csr-approver-29489780-fdl6m" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.592516 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsn5m\" (UniqueName: \"kubernetes.io/projected/c993c292-35cf-45f2-8be9-beb81e25a150-kube-api-access-gsn5m\") pod \"auto-csr-approver-29489780-fdl6m\" (UID: \"c993c292-35cf-45f2-8be9-beb81e25a150\") " pod="openshift-infra/auto-csr-approver-29489780-fdl6m" Jan 26 00:20:03 crc kubenswrapper[5121]: I0126 00:20:03.763396 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-fdl6m" Jan 26 00:20:04 crc kubenswrapper[5121]: I0126 00:20:04.045858 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-fdl6m"] Jan 26 00:20:04 crc kubenswrapper[5121]: I0126 00:20:04.062141 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:20:04 crc kubenswrapper[5121]: I0126 00:20:04.413823 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-fdl6m" event={"ID":"c993c292-35cf-45f2-8be9-beb81e25a150","Type":"ContainerStarted","Data":"1f94ecaafd8e14a55c2f19925e821803749da41caaf2e2414676e521aa1540cb"} Jan 26 00:20:06 crc kubenswrapper[5121]: I0126 00:20:06.430893 5121 generic.go:358] "Generic (PLEG): container finished" podID="c993c292-35cf-45f2-8be9-beb81e25a150" containerID="46e36e94ccf4bd052f15b1eb1bc83912554716c0cceaf42d3a3db1c1f758e192" exitCode=0 Jan 26 00:20:06 crc kubenswrapper[5121]: I0126 00:20:06.430976 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-fdl6m" event={"ID":"c993c292-35cf-45f2-8be9-beb81e25a150","Type":"ContainerDied","Data":"46e36e94ccf4bd052f15b1eb1bc83912554716c0cceaf42d3a3db1c1f758e192"} Jan 26 00:20:07 crc kubenswrapper[5121]: I0126 00:20:07.662885 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-fdl6m" Jan 26 00:20:07 crc kubenswrapper[5121]: I0126 00:20:07.726979 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsn5m\" (UniqueName: \"kubernetes.io/projected/c993c292-35cf-45f2-8be9-beb81e25a150-kube-api-access-gsn5m\") pod \"c993c292-35cf-45f2-8be9-beb81e25a150\" (UID: \"c993c292-35cf-45f2-8be9-beb81e25a150\") " Jan 26 00:20:07 crc kubenswrapper[5121]: I0126 00:20:07.735426 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c993c292-35cf-45f2-8be9-beb81e25a150-kube-api-access-gsn5m" (OuterVolumeSpecName: "kube-api-access-gsn5m") pod "c993c292-35cf-45f2-8be9-beb81e25a150" (UID: "c993c292-35cf-45f2-8be9-beb81e25a150"). InnerVolumeSpecName "kube-api-access-gsn5m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:20:07 crc kubenswrapper[5121]: I0126 00:20:07.829739 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gsn5m\" (UniqueName: \"kubernetes.io/projected/c993c292-35cf-45f2-8be9-beb81e25a150-kube-api-access-gsn5m\") on node \"crc\" DevicePath \"\"" Jan 26 00:20:08 crc kubenswrapper[5121]: I0126 00:20:08.448906 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489780-fdl6m" event={"ID":"c993c292-35cf-45f2-8be9-beb81e25a150","Type":"ContainerDied","Data":"1f94ecaafd8e14a55c2f19925e821803749da41caaf2e2414676e521aa1540cb"} Jan 26 00:20:08 crc kubenswrapper[5121]: I0126 00:20:08.448953 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489780-fdl6m" Jan 26 00:20:08 crc kubenswrapper[5121]: I0126 00:20:08.448995 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f94ecaafd8e14a55c2f19925e821803749da41caaf2e2414676e521aa1540cb" Jan 26 00:20:31 crc kubenswrapper[5121]: I0126 00:20:31.802027 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:31 crc kubenswrapper[5121]: I0126 00:20:31.802922 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:21:01 crc kubenswrapper[5121]: I0126 00:21:01.802438 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:21:01 crc kubenswrapper[5121]: I0126 00:21:01.805012 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:21:31 crc kubenswrapper[5121]: I0126 00:21:31.802438 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:21:31 crc kubenswrapper[5121]: I0126 00:21:31.803479 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:21:31 crc kubenswrapper[5121]: I0126 00:21:31.803551 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:21:31 crc kubenswrapper[5121]: I0126 00:21:31.805023 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32fd100fce0d17b0cb1b0932a20894d5150463e8a26ab26138c2ddecc38ffec5"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:21:31 crc kubenswrapper[5121]: I0126 00:21:31.805087 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" 
containerID="cri-o://32fd100fce0d17b0cb1b0932a20894d5150463e8a26ab26138c2ddecc38ffec5" gracePeriod=600 Jan 26 00:21:32 crc kubenswrapper[5121]: I0126 00:21:32.109276 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="32fd100fce0d17b0cb1b0932a20894d5150463e8a26ab26138c2ddecc38ffec5" exitCode=0 Jan 26 00:21:32 crc kubenswrapper[5121]: I0126 00:21:32.109865 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"32fd100fce0d17b0cb1b0932a20894d5150463e8a26ab26138c2ddecc38ffec5"} Jan 26 00:21:32 crc kubenswrapper[5121]: I0126 00:21:32.109921 5121 scope.go:117] "RemoveContainer" containerID="f39a6654dbb57a32f87d3c6d5c0d5216f516cfa8d25596ac86a0268ff2b003c6" Jan 26 00:21:33 crc kubenswrapper[5121]: I0126 00:21:33.121320 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"8dff4e88b41be67d172c0dc3962b2a57b2fe7254550f8a45781d21ad403679a1"} Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.146469 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489782-74q2r"] Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.147983 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c993c292-35cf-45f2-8be9-beb81e25a150" containerName="oc" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.148001 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c993c292-35cf-45f2-8be9-beb81e25a150" containerName="oc" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.148121 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c993c292-35cf-45f2-8be9-beb81e25a150" containerName="oc" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.153604 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-74q2r" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.154548 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-74q2r"] Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.157442 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.157598 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.158857 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.300935 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj77v\" (UniqueName: \"kubernetes.io/projected/5a3bd5ec-bc60-4df9-af1d-f70c63c5681d-kube-api-access-rj77v\") pod \"auto-csr-approver-29489782-74q2r\" (UID: \"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d\") " pod="openshift-infra/auto-csr-approver-29489782-74q2r" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.402967 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rj77v\" (UniqueName: \"kubernetes.io/projected/5a3bd5ec-bc60-4df9-af1d-f70c63c5681d-kube-api-access-rj77v\") pod \"auto-csr-approver-29489782-74q2r\" (UID: \"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d\") " pod="openshift-infra/auto-csr-approver-29489782-74q2r" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.429892 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj77v\" (UniqueName: \"kubernetes.io/projected/5a3bd5ec-bc60-4df9-af1d-f70c63c5681d-kube-api-access-rj77v\") pod \"auto-csr-approver-29489782-74q2r\" (UID: \"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d\") " pod="openshift-infra/auto-csr-approver-29489782-74q2r" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.474715 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-74q2r" Jan 26 00:22:00 crc kubenswrapper[5121]: I0126 00:22:00.895116 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-74q2r"] Jan 26 00:22:01 crc kubenswrapper[5121]: I0126 00:22:01.322523 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-74q2r" event={"ID":"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d","Type":"ContainerStarted","Data":"d199a1b85dec0ad6fb7cc08680d58e42241d96be03591a6c66b09ec8fca175e4"} Jan 26 00:22:02 crc kubenswrapper[5121]: I0126 00:22:02.334456 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-74q2r" event={"ID":"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d","Type":"ContainerStarted","Data":"b337398270acf844d03020cb36ac67644d9d315b83dab3c543afbd4d159f1560"} Jan 26 00:22:03 crc kubenswrapper[5121]: I0126 00:22:03.345485 5121 generic.go:358] "Generic (PLEG): container finished" podID="5a3bd5ec-bc60-4df9-af1d-f70c63c5681d" containerID="b337398270acf844d03020cb36ac67644d9d315b83dab3c543afbd4d159f1560" exitCode=0 Jan 26 00:22:03 crc kubenswrapper[5121]: I0126 00:22:03.345567 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-74q2r" event={"ID":"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d","Type":"ContainerDied","Data":"b337398270acf844d03020cb36ac67644d9d315b83dab3c543afbd4d159f1560"} Jan 26 00:22:04 crc kubenswrapper[5121]: I0126 00:22:04.578597 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-74q2r" Jan 26 00:22:04 crc kubenswrapper[5121]: I0126 00:22:04.623407 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj77v\" (UniqueName: \"kubernetes.io/projected/5a3bd5ec-bc60-4df9-af1d-f70c63c5681d-kube-api-access-rj77v\") pod \"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d\" (UID: \"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d\") " Jan 26 00:22:04 crc kubenswrapper[5121]: I0126 00:22:04.639933 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3bd5ec-bc60-4df9-af1d-f70c63c5681d-kube-api-access-rj77v" (OuterVolumeSpecName: "kube-api-access-rj77v") pod "5a3bd5ec-bc60-4df9-af1d-f70c63c5681d" (UID: "5a3bd5ec-bc60-4df9-af1d-f70c63c5681d"). InnerVolumeSpecName "kube-api-access-rj77v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:04 crc kubenswrapper[5121]: I0126 00:22:04.725537 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rj77v\" (UniqueName: \"kubernetes.io/projected/5a3bd5ec-bc60-4df9-af1d-f70c63c5681d-kube-api-access-rj77v\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:05 crc kubenswrapper[5121]: I0126 00:22:05.361660 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489782-74q2r" Jan 26 00:22:05 crc kubenswrapper[5121]: I0126 00:22:05.361673 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489782-74q2r" event={"ID":"5a3bd5ec-bc60-4df9-af1d-f70c63c5681d","Type":"ContainerDied","Data":"d199a1b85dec0ad6fb7cc08680d58e42241d96be03591a6c66b09ec8fca175e4"} Jan 26 00:22:05 crc kubenswrapper[5121]: I0126 00:22:05.361742 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d199a1b85dec0ad6fb7cc08680d58e42241d96be03591a6c66b09ec8fca175e4" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.323980 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm"] Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.326116 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="kube-rbac-proxy" containerID="cri-o://c2f6c1d726e6ebd73f2b63b399de8f4f6ec7ef40be7ae7ffde7cd8dca5f021d7" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.326337 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="ovnkube-cluster-manager" containerID="cri-o://d252159539b6aa936348da8f7545cfcc9b6f0803a26ced328848eb5eb54e106b" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.545268 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7l6td"] Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.546463 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-controller" containerID="cri-o://bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.546883 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="northd" containerID="cri-o://7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.547513 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="sbdb" containerID="cri-o://c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.547475 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-node" containerID="cri-o://82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.547594 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="nbdb" 
containerID="cri-o://7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.547645 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.547681 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-acl-logging" containerID="cri-o://7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.555617 5121 generic.go:358] "Generic (PLEG): container finished" podID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerID="d252159539b6aa936348da8f7545cfcc9b6f0803a26ced328848eb5eb54e106b" exitCode=0 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.555659 5121 generic.go:358] "Generic (PLEG): container finished" podID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerID="c2f6c1d726e6ebd73f2b63b399de8f4f6ec7ef40be7ae7ffde7cd8dca5f021d7" exitCode=0 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.556135 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" event={"ID":"a042b0d8-0b7b-4790-a026-e24e2f1426ae","Type":"ContainerDied","Data":"d252159539b6aa936348da8f7545cfcc9b6f0803a26ced328848eb5eb54e106b"} Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.556186 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" event={"ID":"a042b0d8-0b7b-4790-a026-e24e2f1426ae","Type":"ContainerDied","Data":"c2f6c1d726e6ebd73f2b63b399de8f4f6ec7ef40be7ae7ffde7cd8dca5f021d7"} Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.556204 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" event={"ID":"a042b0d8-0b7b-4790-a026-e24e2f1426ae","Type":"ContainerDied","Data":"94f68dad17f43462f570bc5800ba5645d24462429290989dc9f8f7565e8152d6"} Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.556218 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f68dad17f43462f570bc5800ba5645d24462429290989dc9f8f7565e8152d6" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.582873 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovnkube-controller" containerID="cri-o://5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" gracePeriod=30 Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.726559 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.761098 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85"] Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.761841 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="ovnkube-cluster-manager" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.761864 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="ovnkube-cluster-manager" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.761879 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="kube-rbac-proxy" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.761887 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="kube-rbac-proxy" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.761894 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a3bd5ec-bc60-4df9-af1d-f70c63c5681d" containerName="oc" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.761900 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3bd5ec-bc60-4df9-af1d-f70c63c5681d" containerName="oc" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.762023 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="kube-rbac-proxy" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.762036 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a3bd5ec-bc60-4df9-af1d-f70c63c5681d" containerName="oc" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.762044 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" containerName="ovnkube-cluster-manager" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.765943 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.770231 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-env-overrides\") pod \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.770410 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7twmw\" (UniqueName: \"kubernetes.io/projected/a042b0d8-0b7b-4790-a026-e24e2f1426ae-kube-api-access-7twmw\") pod \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.770495 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovnkube-config\") pod \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.770587 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovn-control-plane-metrics-cert\") pod \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\" (UID: \"a042b0d8-0b7b-4790-a026-e24e2f1426ae\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.771461 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a042b0d8-0b7b-4790-a026-e24e2f1426ae" (UID: "a042b0d8-0b7b-4790-a026-e24e2f1426ae"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.771752 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a042b0d8-0b7b-4790-a026-e24e2f1426ae" (UID: "a042b0d8-0b7b-4790-a026-e24e2f1426ae"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.779699 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "a042b0d8-0b7b-4790-a026-e24e2f1426ae" (UID: "a042b0d8-0b7b-4790-a026-e24e2f1426ae"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.780110 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a042b0d8-0b7b-4790-a026-e24e2f1426ae-kube-api-access-7twmw" (OuterVolumeSpecName: "kube-api-access-7twmw") pod "a042b0d8-0b7b-4790-a026-e24e2f1426ae" (UID: "a042b0d8-0b7b-4790-a026-e24e2f1426ae"). InnerVolumeSpecName "kube-api-access-7twmw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.872309 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.872461 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.872540 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.872953 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tpgj\" (UniqueName: \"kubernetes.io/projected/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-kube-api-access-2tpgj\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.873108 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7twmw\" (UniqueName: \"kubernetes.io/projected/a042b0d8-0b7b-4790-a026-e24e2f1426ae-kube-api-access-7twmw\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.873127 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.873140 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a042b0d8-0b7b-4790-a026-e24e2f1426ae-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.873156 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a042b0d8-0b7b-4790-a026-e24e2f1426ae-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.899814 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7l6td_c13c9422-5f83-40d0-bb0f-3055101ae2ba/ovn-acl-logging/0.log" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.900728 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7l6td_c13c9422-5f83-40d0-bb0f-3055101ae2ba/ovn-controller/0.log" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.901328 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.964648 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h5hzr"] Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965342 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="nbdb" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965365 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="nbdb" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965396 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="northd" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965403 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="northd" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965412 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="sbdb" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965419 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="sbdb" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965429 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kubecfg-setup" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965436 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kubecfg-setup" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965445 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965452 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965461 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-node" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965467 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-node" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965478 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-controller" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965484 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-controller" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965493 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-acl-logging" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965498 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-acl-logging" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 
00:22:31.965511 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovnkube-controller" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965518 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovnkube-controller" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965631 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="nbdb" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965645 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="northd" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965657 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-controller" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965666 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965675 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovn-acl-logging" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965683 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="kube-rbac-proxy-node" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965690 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="ovnkube-controller" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.965697 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerName="sbdb" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.973393 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974671 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974711 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974801 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-netd\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974823 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-openvswitch\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974838 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-node-log\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974857 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-bin\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974885 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974915 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974939 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-ovn-kubernetes\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974968 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975005 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-log-socket\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974940 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-node-log" (OuterVolumeSpecName: "node-log") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975084 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.974945 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975001 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975111 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975036 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975135 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-log-socket" (OuterVolumeSpecName: "log-socket") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975143 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975934 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.975059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-netns\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978064 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-etc-openvswitch\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978107 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-ovn\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978141 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-systemd-units\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978168 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978174 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978233 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978262 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jbmj\" (UniqueName: \"kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978251 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978288 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-var-lib-openvswitch\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978325 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-systemd\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978362 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-slash\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978420 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978667 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.978714 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-slash" (OuterVolumeSpecName: "host-slash") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.979737 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-kubelet\") pod \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\" (UID: \"c13c9422-5f83-40d0-bb0f-3055101ae2ba\") " Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.979806 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.979985 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.980048 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.980268 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tpgj\" (UniqueName: \"kubernetes.io/projected/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-kube-api-access-2tpgj\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.980407 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.980737 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.980880 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.982582 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj" (OuterVolumeSpecName: "kube-api-access-4jbmj") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "kube-api-access-4jbmj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985722 5121 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985752 5121 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985775 5121 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985785 5121 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985797 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985808 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jbmj\" (UniqueName: \"kubernetes.io/projected/c13c9422-5f83-40d0-bb0f-3055101ae2ba-kube-api-access-4jbmj\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985818 5121 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985826 5121 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985835 5121 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985844 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985858 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985873 5121 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985889 5121 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 
00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985900 5121 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985910 5121 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985920 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c13c9422-5f83-40d0-bb0f-3055101ae2ba-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985932 5121 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985945 5121 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.985959 5121 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.986134 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:31 crc kubenswrapper[5121]: I0126 00:22:31.986353 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.000285 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c13c9422-5f83-40d0-bb0f-3055101ae2ba" (UID: "c13c9422-5f83-40d0-bb0f-3055101ae2ba"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.000891 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tpgj\" (UniqueName: \"kubernetes.io/projected/fc262b85-8e16-4eaa-9a26-e6d3ceee00d5-kube-api-access-2tpgj\") pod \"ovnkube-control-plane-97c9b6c48-p9q85\" (UID: \"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087034 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087102 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-log-socket\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087233 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-ovnkube-config\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087412 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087466 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-env-overrides\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087505 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-systemd-units\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087552 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-var-lib-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087653 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087700 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-node-log\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087734 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-kubelet\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.087926 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-systemd\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088042 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-cni-bin\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088064 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-cni-netd\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088116 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-slash\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088156 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-ovnkube-script-lib\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088186 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-etc-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088219 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-run-netns\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088257 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb9c7\" (UniqueName: \"kubernetes.io/projected/f1e47824-59d2-4724-89be-c8e9381cc29b-kube-api-access-kb9c7\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088281 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-ovn\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088357 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1e47824-59d2-4724-89be-c8e9381cc29b-ovn-node-metrics-cert\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.088480 5121 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c13c9422-5f83-40d0-bb0f-3055101ae2ba-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.168847 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190279 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-env-overrides\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190372 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-systemd-units\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190405 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-var-lib-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190431 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190458 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-node-log\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190485 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-kubelet\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190514 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-systemd\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190551 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-cni-bin\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190582 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-cni-netd\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc 
kubenswrapper[5121]: I0126 00:22:32.190612 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-slash\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190642 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-ovnkube-script-lib\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190666 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-etc-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190694 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-run-netns\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190725 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kb9c7\" (UniqueName: \"kubernetes.io/projected/f1e47824-59d2-4724-89be-c8e9381cc29b-kube-api-access-kb9c7\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190751 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-ovn\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190819 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1e47824-59d2-4724-89be-c8e9381cc29b-ovn-node-metrics-cert\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190865 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190898 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-log-socket\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 
00:22:32.190926 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-ovnkube-config\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.190974 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191076 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191127 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-systemd-units\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191162 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-var-lib-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191188 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191220 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-node-log\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191254 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-kubelet\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191288 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-systemd\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-cni-bin\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191344 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-cni-netd\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191371 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-slash\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191554 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-env-overrides\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191664 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-run-ovn\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.191794 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-run-netns\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.192205 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-ovnkube-script-lib\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.192397 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.192714 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-etc-openvswitch\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.192862 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f1e47824-59d2-4724-89be-c8e9381cc29b-log-socket\") pod \"ovnkube-node-h5hzr\" (UID: 
\"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.193386 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f1e47824-59d2-4724-89be-c8e9381cc29b-ovnkube-config\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.197403 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f1e47824-59d2-4724-89be-c8e9381cc29b-ovn-node-metrics-cert\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.211905 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb9c7\" (UniqueName: \"kubernetes.io/projected/f1e47824-59d2-4724-89be-c8e9381cc29b-kube-api-access-kb9c7\") pod \"ovnkube-node-h5hzr\" (UID: \"f1e47824-59d2-4724-89be-c8e9381cc29b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.301193 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" Jan 26 00:22:32 crc kubenswrapper[5121]: W0126 00:22:32.345279 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1e47824_59d2_4724_89be_c8e9381cc29b.slice/crio-3e5b226e68ddc252a19bb00efbdc2b9adbec9e6f26cd1a5b48f7fab70567fd64 WatchSource:0}: Error finding container 3e5b226e68ddc252a19bb00efbdc2b9adbec9e6f26cd1a5b48f7fab70567fd64: Status 404 returned error can't find the container with id 3e5b226e68ddc252a19bb00efbdc2b9adbec9e6f26cd1a5b48f7fab70567fd64 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.565524 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" event={"ID":"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5","Type":"ContainerStarted","Data":"426562b3dcd80a2d2c57c1486b104904f55e4bb004ca810b4c5f6927e03d2b81"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.568306 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.568348 5121 generic.go:358] "Generic (PLEG): container finished" podID="21d6bae8-c026-4b2f-9127-ca53977e50d8" containerID="8beee9011422d1e0a77b616cac7a91b910641fa0f718b4aef38907b73f462cf9" exitCode=2 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.568497 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhg6w" event={"ID":"21d6bae8-c026-4b2f-9127-ca53977e50d8","Type":"ContainerDied","Data":"8beee9011422d1e0a77b616cac7a91b910641fa0f718b4aef38907b73f462cf9"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.569586 5121 scope.go:117] "RemoveContainer" containerID="8beee9011422d1e0a77b616cac7a91b910641fa0f718b4aef38907b73f462cf9" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.574211 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7l6td_c13c9422-5f83-40d0-bb0f-3055101ae2ba/ovn-acl-logging/0.log" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 
00:22:32.575817 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7l6td_c13c9422-5f83-40d0-bb0f-3055101ae2ba/ovn-controller/0.log" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576337 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" exitCode=0 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576366 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" exitCode=0 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576375 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" exitCode=0 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576382 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" exitCode=0 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576388 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" exitCode=0 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576414 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" exitCode=0 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576422 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" exitCode=143 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576428 5121 generic.go:358] "Generic (PLEG): container finished" podID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" containerID="bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" exitCode=143 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576536 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576592 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576606 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576625 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 
00:22:32.576674 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576686 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576698 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576749 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576776 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576784 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576794 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576804 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576811 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576817 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576822 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576828 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576834 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576840 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576845 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576865 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576874 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576887 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576893 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576898 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576902 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576907 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576912 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576917 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576922 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576926 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576934 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7l6td" event={"ID":"c13c9422-5f83-40d0-bb0f-3055101ae2ba","Type":"ContainerDied","Data":"7b5541cbee6f4b93c96d7abb2d6b41119bc6cc2de7a940af0a248ff8cd825692"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576944 5121 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576950 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576955 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576960 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576965 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576970 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576975 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576980 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.576985 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.577002 5121 scope.go:117] "RemoveContainer" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.580778 5121 generic.go:358] "Generic (PLEG): container finished" podID="f1e47824-59d2-4724-89be-c8e9381cc29b" containerID="465e5eaff39809b25f9ab68d7d506f85059fb6eb6164971eff54f2e612b65885" exitCode=0 Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.580835 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerDied","Data":"465e5eaff39809b25f9ab68d7d506f85059fb6eb6164971eff54f2e612b65885"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.580912 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"3e5b226e68ddc252a19bb00efbdc2b9adbec9e6f26cd1a5b48f7fab70567fd64"} Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.581070 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.653273 5121 scope.go:117] "RemoveContainer" containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.675609 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7l6td"] Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.684359 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7l6td"] Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.685948 5121 scope.go:117] "RemoveContainer" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.694910 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm"] Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.695062 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-2hvlm"] Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.730134 5121 scope.go:117] "RemoveContainer" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.770431 5121 scope.go:117] "RemoveContainer" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.796182 5121 scope.go:117] "RemoveContainer" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.814818 5121 scope.go:117] "RemoveContainer" containerID="7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.833293 5121 scope.go:117] "RemoveContainer" containerID="bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.854315 5121 scope.go:117] "RemoveContainer" containerID="c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.877691 5121 scope.go:117] "RemoveContainer" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.878297 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": container with ID starting with 5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060 not found: ID does not exist" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.878363 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} err="failed to get container status \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": rpc error: code = NotFound desc = could not find container \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": container with ID starting with 5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.878434 5121 
scope.go:117] "RemoveContainer" containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.878943 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": container with ID starting with c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c not found: ID does not exist" containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.878987 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} err="failed to get container status \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": rpc error: code = NotFound desc = could not find container \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": container with ID starting with c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.879023 5121 scope.go:117] "RemoveContainer" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.879367 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": container with ID starting with 7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e not found: ID does not exist" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.879398 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} err="failed to get container status \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": rpc error: code = NotFound desc = could not find container \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": container with ID starting with 7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.879415 5121 scope.go:117] "RemoveContainer" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.879729 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": container with ID starting with 7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7 not found: ID does not exist" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.879754 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} err="failed to get container status \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": rpc error: code = NotFound desc = could not find container \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": container with ID starting with 
7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.879831 5121 scope.go:117] "RemoveContainer" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.880140 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": container with ID starting with 39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e not found: ID does not exist" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.880167 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} err="failed to get container status \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": rpc error: code = NotFound desc = could not find container \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": container with ID starting with 39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.880189 5121 scope.go:117] "RemoveContainer" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.880464 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": container with ID starting with 82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb not found: ID does not exist" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.880502 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} err="failed to get container status \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": rpc error: code = NotFound desc = could not find container \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": container with ID starting with 82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.880520 5121 scope.go:117] "RemoveContainer" containerID="7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.881652 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": container with ID starting with 7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec not found: ID does not exist" containerID="7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.881683 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} err="failed to get container status \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": rpc 
error: code = NotFound desc = could not find container \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": container with ID starting with 7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.881704 5121 scope.go:117] "RemoveContainer" containerID="bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.882014 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": container with ID starting with bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd not found: ID does not exist" containerID="bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.882042 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} err="failed to get container status \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": rpc error: code = NotFound desc = could not find container \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": container with ID starting with bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.882066 5121 scope.go:117] "RemoveContainer" containerID="c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da" Jan 26 00:22:32 crc kubenswrapper[5121]: E0126 00:22:32.882441 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": container with ID starting with c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da not found: ID does not exist" containerID="c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.882476 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} err="failed to get container status \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": rpc error: code = NotFound desc = could not find container \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": container with ID starting with c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.882505 5121 scope.go:117] "RemoveContainer" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.883068 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} err="failed to get container status \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": rpc error: code = NotFound desc = could not find container \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": container with ID starting with 5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 
00:22:32.883093 5121 scope.go:117] "RemoveContainer" containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.883462 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} err="failed to get container status \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": rpc error: code = NotFound desc = could not find container \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": container with ID starting with c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.883526 5121 scope.go:117] "RemoveContainer" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.884092 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} err="failed to get container status \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": rpc error: code = NotFound desc = could not find container \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": container with ID starting with 7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.884119 5121 scope.go:117] "RemoveContainer" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.884503 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} err="failed to get container status \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": rpc error: code = NotFound desc = could not find container \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": container with ID starting with 7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.884554 5121 scope.go:117] "RemoveContainer" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.885064 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} err="failed to get container status \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": rpc error: code = NotFound desc = could not find container \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": container with ID starting with 39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.885092 5121 scope.go:117] "RemoveContainer" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.885751 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} err="failed to get container status 
\"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": rpc error: code = NotFound desc = could not find container \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": container with ID starting with 82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.885826 5121 scope.go:117] "RemoveContainer" containerID="7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.886125 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} err="failed to get container status \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": rpc error: code = NotFound desc = could not find container \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": container with ID starting with 7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.886157 5121 scope.go:117] "RemoveContainer" containerID="bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.886390 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} err="failed to get container status \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": rpc error: code = NotFound desc = could not find container \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": container with ID starting with bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.886414 5121 scope.go:117] "RemoveContainer" containerID="c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.886695 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} err="failed to get container status \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": rpc error: code = NotFound desc = could not find container \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": container with ID starting with c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.886717 5121 scope.go:117] "RemoveContainer" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.887491 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} err="failed to get container status \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": rpc error: code = NotFound desc = could not find container \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": container with ID starting with 5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.887553 5121 scope.go:117] "RemoveContainer" 
containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.888028 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} err="failed to get container status \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": rpc error: code = NotFound desc = could not find container \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": container with ID starting with c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.888053 5121 scope.go:117] "RemoveContainer" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.888582 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} err="failed to get container status \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": rpc error: code = NotFound desc = could not find container \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": container with ID starting with 7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.888605 5121 scope.go:117] "RemoveContainer" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.888948 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} err="failed to get container status \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": rpc error: code = NotFound desc = could not find container \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": container with ID starting with 7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.888971 5121 scope.go:117] "RemoveContainer" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.889216 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} err="failed to get container status \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": rpc error: code = NotFound desc = could not find container \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": container with ID starting with 39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.889237 5121 scope.go:117] "RemoveContainer" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.889456 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} err="failed to get container status \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": rpc error: code = NotFound desc = could not find 
container \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": container with ID starting with 82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.889481 5121 scope.go:117] "RemoveContainer" containerID="7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.889720 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} err="failed to get container status \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": rpc error: code = NotFound desc = could not find container \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": container with ID starting with 7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.889741 5121 scope.go:117] "RemoveContainer" containerID="bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890003 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} err="failed to get container status \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": rpc error: code = NotFound desc = could not find container \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": container with ID starting with bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890024 5121 scope.go:117] "RemoveContainer" containerID="c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890257 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} err="failed to get container status \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": rpc error: code = NotFound desc = could not find container \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": container with ID starting with c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890278 5121 scope.go:117] "RemoveContainer" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890471 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} err="failed to get container status \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": rpc error: code = NotFound desc = could not find container \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": container with ID starting with 5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890514 5121 scope.go:117] "RemoveContainer" containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890746 5121 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} err="failed to get container status \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": rpc error: code = NotFound desc = could not find container \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": container with ID starting with c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.890785 5121 scope.go:117] "RemoveContainer" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.891096 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} err="failed to get container status \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": rpc error: code = NotFound desc = could not find container \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": container with ID starting with 7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.891134 5121 scope.go:117] "RemoveContainer" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.891419 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} err="failed to get container status \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": rpc error: code = NotFound desc = could not find container \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": container with ID starting with 7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.891447 5121 scope.go:117] "RemoveContainer" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.891723 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} err="failed to get container status \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": rpc error: code = NotFound desc = could not find container \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": container with ID starting with 39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.891746 5121 scope.go:117] "RemoveContainer" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892058 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} err="failed to get container status \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": rpc error: code = NotFound desc = could not find container \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": container with ID starting with 
82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892079 5121 scope.go:117] "RemoveContainer" containerID="7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892328 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec"} err="failed to get container status \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": rpc error: code = NotFound desc = could not find container \"7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec\": container with ID starting with 7f09a4b10a57a4587fca1b8aa04e7e0be550c287dec482447425adb0edb946ec not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892355 5121 scope.go:117] "RemoveContainer" containerID="bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892611 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd"} err="failed to get container status \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": rpc error: code = NotFound desc = could not find container \"bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd\": container with ID starting with bb1af6f4e27c5f27cac62beced5a8dd4f62701dade94f71b68d2e5c9e0c1c7fd not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892648 5121 scope.go:117] "RemoveContainer" containerID="c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892972 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da"} err="failed to get container status \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": rpc error: code = NotFound desc = could not find container \"c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da\": container with ID starting with c090aa752991b48954f45c9aec440c850eddf71b5c7fa9e7ebdf37a74386d2da not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.892994 5121 scope.go:117] "RemoveContainer" containerID="5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.893235 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060"} err="failed to get container status \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": rpc error: code = NotFound desc = could not find container \"5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060\": container with ID starting with 5205b9173fd2761d56b0eadf02f2a0f2d9ff55127a812138e07432b40cda8060 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.893254 5121 scope.go:117] "RemoveContainer" containerID="c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.893598 5121 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c"} err="failed to get container status \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": rpc error: code = NotFound desc = could not find container \"c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c\": container with ID starting with c6585c076dabcd4e4042fa4ed1c3fc2b13c30a1cf31549bd8a12db15e632930c not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.893626 5121 scope.go:117] "RemoveContainer" containerID="7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.893945 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e"} err="failed to get container status \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": rpc error: code = NotFound desc = could not find container \"7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e\": container with ID starting with 7e3e1df5e08a8e738cda501954568f3985a99a95d862adcc439caeeea0ab382e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.893976 5121 scope.go:117] "RemoveContainer" containerID="7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.894244 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7"} err="failed to get container status \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": rpc error: code = NotFound desc = could not find container \"7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7\": container with ID starting with 7a24251e014b57d6ccb36a3b5a2a67bac5db228c1c709904bb4f2b1ac1d4f1e7 not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.894270 5121 scope.go:117] "RemoveContainer" containerID="39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.894520 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e"} err="failed to get container status \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": rpc error: code = NotFound desc = could not find container \"39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e\": container with ID starting with 39c71722d38cf5eabd2371e71a4715803f8697a1d490ce18c76484d99b65792e not found: ID does not exist" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.894540 5121 scope.go:117] "RemoveContainer" containerID="82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb" Jan 26 00:22:32 crc kubenswrapper[5121]: I0126 00:22:32.894779 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb"} err="failed to get container status \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": rpc error: code = NotFound desc = could not find container \"82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb\": container with ID starting with 82c55da089bc561faff7583fc1fbd5cc8a1a191025d1add84d357874be4d5abb not found: ID does not exist" Jan 
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.591033 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" event={"ID":"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5","Type":"ContainerStarted","Data":"6c17b9010dac32ff254d1e769686951caad9e3582211b4ac7727436be97641da"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.591106 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" event={"ID":"fc262b85-8e16-4eaa-9a26-e6d3ceee00d5","Type":"ContainerStarted","Data":"59417c79628b1cf092f893537da8aa70a5f8682f715473454e36d305e0cdf2fc"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.595934 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log"
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.596070 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhg6w" event={"ID":"21d6bae8-c026-4b2f-9127-ca53977e50d8","Type":"ContainerStarted","Data":"39c508b9fb465f6edf4ba9c45247893025118c37ba87aaf12178964f68ca4bee"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.603118 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"a0b30f12c68a415b2dcb4c8850d6528b34e12781c4543d717fb6c74ecb1f58bb"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.603152 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"48baa00fb47dcc5f6594cb6937ccc48f8f854cd5d43d9f2929f15159f2369632"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.603163 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"80ee16e7ee947536736156cbacb6bf0651662eb8e400238ce7883fedd8aa0ea1"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.603174 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"19f80d8a73d7bb309a7f13a4fda47eb8ef76125d167f59b547a086e9ebcac18e"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.603186 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"8d916caa9f4128fb0aa0179ae0c7eac11898f0bce95202df00a560285ef82a67"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.603197 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"7b140b60327dc66d74f187a0e19e6446de76124fb4bf161de33a3c71fd5d282a"}
Jan 26 00:22:33 crc kubenswrapper[5121]: I0126 00:22:33.643909 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-p9q85" podStartSLOduration=2.643877106 podStartE2EDuration="2.643877106s" podCreationTimestamp="2026-01-26 00:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:22:33.618427258 +0000 UTC m=+784.777628383" watchObservedRunningTime="2026-01-26 00:22:33.643877106 +0000 UTC m=+784.803078251"
Jan 26 00:22:34 crc kubenswrapper[5121]: I0126 00:22:34.269163 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a042b0d8-0b7b-4790-a026-e24e2f1426ae" path="/var/lib/kubelet/pods/a042b0d8-0b7b-4790-a026-e24e2f1426ae/volumes"
Jan 26 00:22:34 crc kubenswrapper[5121]: I0126 00:22:34.270587 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c13c9422-5f83-40d0-bb0f-3055101ae2ba" path="/var/lib/kubelet/pods/c13c9422-5f83-40d0-bb0f-3055101ae2ba/volumes"
Jan 26 00:22:36 crc kubenswrapper[5121]: I0126 00:22:36.627036 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"fa7f10b4f78c4c3c4333baaf05de04539734d63e1fc35dc0f76934c4139c116d"}
Jan 26 00:22:38 crc kubenswrapper[5121]: I0126 00:22:38.652206 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" event={"ID":"f1e47824-59d2-4724-89be-c8e9381cc29b","Type":"ContainerStarted","Data":"c3fe9b46f8f67ed2a08ce7709a9060d4adc05aeb9e26a43acb8caca3ecf9e46f"}
Jan 26 00:22:38 crc kubenswrapper[5121]: I0126 00:22:38.652691 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr"
Jan 26 00:22:38 crc kubenswrapper[5121]: I0126 00:22:38.652708 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr"
Jan 26 00:22:38 crc kubenswrapper[5121]: I0126 00:22:38.652719 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr"
Jan 26 00:22:38 crc kubenswrapper[5121]: I0126 00:22:38.715473 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr" podStartSLOduration=7.715447851 podStartE2EDuration="7.715447851s" podCreationTimestamp="2026-01-26 00:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:22:38.68678783 +0000 UTC m=+789.845988975" watchObservedRunningTime="2026-01-26 00:22:38.715447851 +0000 UTC m=+789.874648976"
Jan 26 00:22:38 crc kubenswrapper[5121]: I0126 00:22:38.718196 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr"
Jan 26 00:22:38 crc kubenswrapper[5121]: I0126 00:22:38.724862 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr"
Jan 26 00:23:10 crc kubenswrapper[5121]: I0126 00:23:10.697248 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5hzr"
Jan 26 00:23:30 crc kubenswrapper[5121]: I0126 00:23:30.803135 5121 scope.go:117] "RemoveContainer" containerID="c2f6c1d726e6ebd73f2b63b399de8f4f6ec7ef40be7ae7ffde7cd8dca5f021d7"
Jan 26 00:23:30 crc kubenswrapper[5121]: I0126 00:23:30.830493 5121 scope.go:117] "RemoveContainer" containerID="d252159539b6aa936348da8f7545cfcc9b6f0803a26ced328848eb5eb54e106b"
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.140093 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489784-dgk4j"]
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.259501 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-dgk4j"
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.263019 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.266368 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\""
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.266790 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.267383 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-dgk4j"]
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.392196 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwtp8\" (UniqueName: \"kubernetes.io/projected/d88412b4-4371-426f-85fb-313e41c1e075-kube-api-access-zwtp8\") pod \"auto-csr-approver-29489784-dgk4j\" (UID: \"d88412b4-4371-426f-85fb-313e41c1e075\") " pod="openshift-infra/auto-csr-approver-29489784-dgk4j"
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.493005 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zwtp8\" (UniqueName: \"kubernetes.io/projected/d88412b4-4371-426f-85fb-313e41c1e075-kube-api-access-zwtp8\") pod \"auto-csr-approver-29489784-dgk4j\" (UID: \"d88412b4-4371-426f-85fb-313e41c1e075\") " pod="openshift-infra/auto-csr-approver-29489784-dgk4j"
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.515870 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwtp8\" (UniqueName: \"kubernetes.io/projected/d88412b4-4371-426f-85fb-313e41c1e075-kube-api-access-zwtp8\") pod \"auto-csr-approver-29489784-dgk4j\" (UID: \"d88412b4-4371-426f-85fb-313e41c1e075\") " pod="openshift-infra/auto-csr-approver-29489784-dgk4j"
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.576044 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-dgk4j"
Jan 26 00:24:00 crc kubenswrapper[5121]: I0126 00:24:00.809489 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-dgk4j"]
Jan 26 00:24:01 crc kubenswrapper[5121]: I0126 00:24:01.385410 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-dgk4j" event={"ID":"d88412b4-4371-426f-85fb-313e41c1e075","Type":"ContainerStarted","Data":"1beb49916d02f3a3257e5b40ba88edd84dd914db13e146770ae037e7932996ef"}
Jan 26 00:24:01 crc kubenswrapper[5121]: I0126 00:24:01.802069 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:24:01 crc kubenswrapper[5121]: I0126 00:24:01.802155 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:24:03 crc kubenswrapper[5121]: I0126 00:24:03.401939 5121 generic.go:358] "Generic (PLEG): container finished" podID="d88412b4-4371-426f-85fb-313e41c1e075" containerID="ba6c23d2c03ddb6b18f94c59cecf85f17e1ee884b123e109fd422111d1f0f35e" exitCode=0
Jan 26 00:24:03 crc kubenswrapper[5121]: I0126 00:24:03.402136 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-dgk4j" event={"ID":"d88412b4-4371-426f-85fb-313e41c1e075","Type":"ContainerDied","Data":"ba6c23d2c03ddb6b18f94c59cecf85f17e1ee884b123e109fd422111d1f0f35e"}
Jan 26 00:24:04 crc kubenswrapper[5121]: I0126 00:24:04.643872 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-dgk4j"
Jan 26 00:24:04 crc kubenswrapper[5121]: I0126 00:24:04.751826 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwtp8\" (UniqueName: \"kubernetes.io/projected/d88412b4-4371-426f-85fb-313e41c1e075-kube-api-access-zwtp8\") pod \"d88412b4-4371-426f-85fb-313e41c1e075\" (UID: \"d88412b4-4371-426f-85fb-313e41c1e075\") "
Jan 26 00:24:04 crc kubenswrapper[5121]: I0126 00:24:04.757469 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88412b4-4371-426f-85fb-313e41c1e075-kube-api-access-zwtp8" (OuterVolumeSpecName: "kube-api-access-zwtp8") pod "d88412b4-4371-426f-85fb-313e41c1e075" (UID: "d88412b4-4371-426f-85fb-313e41c1e075"). InnerVolumeSpecName "kube-api-access-zwtp8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:24:04 crc kubenswrapper[5121]: I0126 00:24:04.854126 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zwtp8\" (UniqueName: \"kubernetes.io/projected/d88412b4-4371-426f-85fb-313e41c1e075-kube-api-access-zwtp8\") on node \"crc\" DevicePath \"\""
Jan 26 00:24:05 crc kubenswrapper[5121]: I0126 00:24:05.417754 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-dgk4j"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489784-dgk4j" Jan 26 00:24:05 crc kubenswrapper[5121]: I0126 00:24:05.417797 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489784-dgk4j" event={"ID":"d88412b4-4371-426f-85fb-313e41c1e075","Type":"ContainerDied","Data":"1beb49916d02f3a3257e5b40ba88edd84dd914db13e146770ae037e7932996ef"} Jan 26 00:24:05 crc kubenswrapper[5121]: I0126 00:24:05.417895 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1beb49916d02f3a3257e5b40ba88edd84dd914db13e146770ae037e7932996ef" Jan 26 00:24:05 crc kubenswrapper[5121]: I0126 00:24:05.713307 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-fmfw6"] Jan 26 00:24:05 crc kubenswrapper[5121]: I0126 00:24:05.717226 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489778-fmfw6"] Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.107548 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gd98"] Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.107904 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9gd98" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="registry-server" containerID="cri-o://ef653cc8d3e3dc8a5f00cf6a0348088213a80f9a2100d96686c3fcaa73e6a7f6" gracePeriod=30 Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.262707 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf51cfa0-3712-4ba5-9394-eb2d0af087b9" path="/var/lib/kubelet/pods/cf51cfa0-3712-4ba5-9394-eb2d0af087b9/volumes" Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.426084 5121 generic.go:358] "Generic (PLEG): container finished" podID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerID="ef653cc8d3e3dc8a5f00cf6a0348088213a80f9a2100d96686c3fcaa73e6a7f6" exitCode=0 Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.426155 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gd98" event={"ID":"242b1a88-f692-4c26-96bc-ee700a89fd4c","Type":"ContainerDied","Data":"ef653cc8d3e3dc8a5f00cf6a0348088213a80f9a2100d96686c3fcaa73e6a7f6"} Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.464549 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.574718 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-utilities\") pod \"242b1a88-f692-4c26-96bc-ee700a89fd4c\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.574773 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cfkx\" (UniqueName: \"kubernetes.io/projected/242b1a88-f692-4c26-96bc-ee700a89fd4c-kube-api-access-2cfkx\") pod \"242b1a88-f692-4c26-96bc-ee700a89fd4c\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.574839 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-catalog-content\") pod \"242b1a88-f692-4c26-96bc-ee700a89fd4c\" (UID: \"242b1a88-f692-4c26-96bc-ee700a89fd4c\") " Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.576912 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-utilities" (OuterVolumeSpecName: "utilities") pod "242b1a88-f692-4c26-96bc-ee700a89fd4c" (UID: "242b1a88-f692-4c26-96bc-ee700a89fd4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.581080 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242b1a88-f692-4c26-96bc-ee700a89fd4c-kube-api-access-2cfkx" (OuterVolumeSpecName: "kube-api-access-2cfkx") pod "242b1a88-f692-4c26-96bc-ee700a89fd4c" (UID: "242b1a88-f692-4c26-96bc-ee700a89fd4c"). InnerVolumeSpecName "kube-api-access-2cfkx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.588108 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "242b1a88-f692-4c26-96bc-ee700a89fd4c" (UID: "242b1a88-f692-4c26-96bc-ee700a89fd4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.676528 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.676565 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2cfkx\" (UniqueName: \"kubernetes.io/projected/242b1a88-f692-4c26-96bc-ee700a89fd4c-kube-api-access-2cfkx\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:06 crc kubenswrapper[5121]: I0126 00:24:06.676575 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242b1a88-f692-4c26-96bc-ee700a89fd4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:07 crc kubenswrapper[5121]: I0126 00:24:07.436988 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gd98" event={"ID":"242b1a88-f692-4c26-96bc-ee700a89fd4c","Type":"ContainerDied","Data":"2d12ebc0ce05a4d7bc048b7115e4c1fac791a05a465c24b290519a43663cb448"} Jan 26 00:24:07 crc kubenswrapper[5121]: I0126 00:24:07.437015 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gd98" Jan 26 00:24:07 crc kubenswrapper[5121]: I0126 00:24:07.437090 5121 scope.go:117] "RemoveContainer" containerID="ef653cc8d3e3dc8a5f00cf6a0348088213a80f9a2100d96686c3fcaa73e6a7f6" Jan 26 00:24:07 crc kubenswrapper[5121]: I0126 00:24:07.459034 5121 scope.go:117] "RemoveContainer" containerID="2b70057252c7c3ee99260923d0180f2152bb0c14b5f286db03d9161c652e04f1" Jan 26 00:24:07 crc kubenswrapper[5121]: I0126 00:24:07.488456 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gd98"] Jan 26 00:24:07 crc kubenswrapper[5121]: I0126 00:24:07.490841 5121 scope.go:117] "RemoveContainer" containerID="a1d42c5f0c55ab823f023b6c31b7379d0812ccb9c6c3d89d7b8f48339927e116" Jan 26 00:24:07 crc kubenswrapper[5121]: I0126 00:24:07.493107 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gd98"] Jan 26 00:24:08 crc kubenswrapper[5121]: I0126 00:24:08.264913 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" path="/var/lib/kubelet/pods/242b1a88-f692-4c26-96bc-ee700a89fd4c/volumes" Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.921507 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"] Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922412 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="extract-content" Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922432 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="extract-content" Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922454 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="registry-server" Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922462 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="registry-server" Jan 26 
Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922488 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d88412b4-4371-426f-85fb-313e41c1e075" containerName="oc"
Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922497 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88412b4-4371-426f-85fb-313e41c1e075" containerName="oc"
Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922518 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="extract-utilities"
Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922527 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="extract-utilities"
Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922673 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="d88412b4-4371-426f-85fb-313e41c1e075" containerName="oc"
Jan 26 00:24:09 crc kubenswrapper[5121]: I0126 00:24:09.922696 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="242b1a88-f692-4c26-96bc-ee700a89fd4c" containerName="registry-server"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.603709 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"]
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.604061 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.606998 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.661627 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.661708 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.661774 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qvpl\" (UniqueName: \"kubernetes.io/projected/f690edc2-1dd5-4fce-81a2-4355eda9213e-kube-api-access-5qvpl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.762719 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5qvpl\" (UniqueName: \"kubernetes.io/projected/f690edc2-1dd5-4fce-81a2-4355eda9213e-kube-api-access-5qvpl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.763171 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.763209 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.763753 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.763821 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.785152 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qvpl\" (UniqueName: \"kubernetes.io/projected/f690edc2-1dd5-4fce-81a2-4355eda9213e-kube-api-access-5qvpl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Jan 26 00:24:10 crc kubenswrapper[5121]: I0126 00:24:10.921441 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" Jan 26 00:24:11 crc kubenswrapper[5121]: I0126 00:24:11.276854 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd"] Jan 26 00:24:11 crc kubenswrapper[5121]: I0126 00:24:11.471035 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" event={"ID":"f690edc2-1dd5-4fce-81a2-4355eda9213e","Type":"ContainerStarted","Data":"a0aa8035540a7a0c8476b8cb018b5bbddf3ee5b92e0f436c9c1c29954434c28e"} Jan 26 00:24:11 crc kubenswrapper[5121]: I0126 00:24:11.471101 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" event={"ID":"f690edc2-1dd5-4fce-81a2-4355eda9213e","Type":"ContainerStarted","Data":"a9ef2efaa84f94bd0543d73eac4bf987e24a7b1025eae454aface698d1a73184"} Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.477160 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-29mkz"] Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.483070 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.484626 5121 generic.go:358] "Generic (PLEG): container finished" podID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerID="a0aa8035540a7a0c8476b8cb018b5bbddf3ee5b92e0f436c9c1c29954434c28e" exitCode=0 Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.484745 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" event={"ID":"f690edc2-1dd5-4fce-81a2-4355eda9213e","Type":"ContainerDied","Data":"a0aa8035540a7a0c8476b8cb018b5bbddf3ee5b92e0f436c9c1c29954434c28e"} Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.496509 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29mkz"] Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.564623 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsvs\" (UniqueName: \"kubernetes.io/projected/5a1738e4-18bd-463f-bf1a-446a306c3f4e-kube-api-access-ldsvs\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.564717 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-utilities\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.564813 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-catalog-content\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.665863 5121 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-catalog-content\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.666065 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsvs\" (UniqueName: \"kubernetes.io/projected/5a1738e4-18bd-463f-bf1a-446a306c3f4e-kube-api-access-ldsvs\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.666105 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-utilities\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.666572 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-catalog-content\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.666683 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-utilities\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.703117 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsvs\" (UniqueName: \"kubernetes.io/projected/5a1738e4-18bd-463f-bf1a-446a306c3f4e-kube-api-access-ldsvs\") pod \"redhat-operators-29mkz\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:12 crc kubenswrapper[5121]: I0126 00:24:12.813904 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:13 crc kubenswrapper[5121]: I0126 00:24:13.122058 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29mkz"] Jan 26 00:24:13 crc kubenswrapper[5121]: I0126 00:24:13.495143 5121 generic.go:358] "Generic (PLEG): container finished" podID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerID="74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd" exitCode=0 Jan 26 00:24:13 crc kubenswrapper[5121]: I0126 00:24:13.495193 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29mkz" event={"ID":"5a1738e4-18bd-463f-bf1a-446a306c3f4e","Type":"ContainerDied","Data":"74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd"} Jan 26 00:24:13 crc kubenswrapper[5121]: I0126 00:24:13.495271 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29mkz" event={"ID":"5a1738e4-18bd-463f-bf1a-446a306c3f4e","Type":"ContainerStarted","Data":"c9fb06fe19907cbd96fdc1ec8e126508c5618ac28be2384ee1cd62645b1e6c71"} Jan 26 00:24:14 crc kubenswrapper[5121]: I0126 00:24:14.611479 5121 generic.go:358] "Generic (PLEG): container finished" podID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerID="05b43dea47e0c77b46d7271cb1850a029cbdd4a3e782cc7ba256d0ee96620c9f" exitCode=0 Jan 26 00:24:14 crc kubenswrapper[5121]: I0126 00:24:14.611580 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" event={"ID":"f690edc2-1dd5-4fce-81a2-4355eda9213e","Type":"ContainerDied","Data":"05b43dea47e0c77b46d7271cb1850a029cbdd4a3e782cc7ba256d0ee96620c9f"} Jan 26 00:24:14 crc kubenswrapper[5121]: I0126 00:24:14.620744 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29mkz" event={"ID":"5a1738e4-18bd-463f-bf1a-446a306c3f4e","Type":"ContainerStarted","Data":"7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518"} Jan 26 00:24:15 crc kubenswrapper[5121]: I0126 00:24:15.714976 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" event={"ID":"f690edc2-1dd5-4fce-81a2-4355eda9213e","Type":"ContainerStarted","Data":"1e60a50bdfb457f48624b0686852bc04ec6db7b17c2d3a1ffd6084db9ea42b95"} Jan 26 00:24:15 crc kubenswrapper[5121]: I0126 00:24:15.743873 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" podStartSLOduration=5.433763421 podStartE2EDuration="6.743851963s" podCreationTimestamp="2026-01-26 00:24:09 +0000 UTC" firstStartedPulling="2026-01-26 00:24:12.486603628 +0000 UTC m=+883.645804753" lastFinishedPulling="2026-01-26 00:24:13.79669217 +0000 UTC m=+884.955893295" observedRunningTime="2026-01-26 00:24:15.741679181 +0000 UTC m=+886.900880316" watchObservedRunningTime="2026-01-26 00:24:15.743851963 +0000 UTC m=+886.903053088" Jan 26 00:24:16 crc kubenswrapper[5121]: I0126 00:24:16.722970 5121 generic.go:358] "Generic (PLEG): container finished" podID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerID="1e60a50bdfb457f48624b0686852bc04ec6db7b17c2d3a1ffd6084db9ea42b95" exitCode=0 Jan 26 00:24:16 crc kubenswrapper[5121]: I0126 00:24:16.723091 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" event={"ID":"f690edc2-1dd5-4fce-81a2-4355eda9213e","Type":"ContainerDied","Data":"1e60a50bdfb457f48624b0686852bc04ec6db7b17c2d3a1ffd6084db9ea42b95"} Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.159880 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.173377 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-bundle\") pod \"f690edc2-1dd5-4fce-81a2-4355eda9213e\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.173432 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qvpl\" (UniqueName: \"kubernetes.io/projected/f690edc2-1dd5-4fce-81a2-4355eda9213e-kube-api-access-5qvpl\") pod \"f690edc2-1dd5-4fce-81a2-4355eda9213e\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.173499 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-util\") pod \"f690edc2-1dd5-4fce-81a2-4355eda9213e\" (UID: \"f690edc2-1dd5-4fce-81a2-4355eda9213e\") " Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.176525 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-bundle" (OuterVolumeSpecName: "bundle") pod "f690edc2-1dd5-4fce-81a2-4355eda9213e" (UID: "f690edc2-1dd5-4fce-81a2-4355eda9213e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.184836 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f690edc2-1dd5-4fce-81a2-4355eda9213e-kube-api-access-5qvpl" (OuterVolumeSpecName: "kube-api-access-5qvpl") pod "f690edc2-1dd5-4fce-81a2-4355eda9213e" (UID: "f690edc2-1dd5-4fce-81a2-4355eda9213e"). InnerVolumeSpecName "kube-api-access-5qvpl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.184844 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-util" (OuterVolumeSpecName: "util") pod "f690edc2-1dd5-4fce-81a2-4355eda9213e" (UID: "f690edc2-1dd5-4fce-81a2-4355eda9213e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.274552 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5qvpl\" (UniqueName: \"kubernetes.io/projected/f690edc2-1dd5-4fce-81a2-4355eda9213e-kube-api-access-5qvpl\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.274594 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.274606 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f690edc2-1dd5-4fce-81a2-4355eda9213e-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.737953 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" event={"ID":"f690edc2-1dd5-4fce-81a2-4355eda9213e","Type":"ContainerDied","Data":"a9ef2efaa84f94bd0543d73eac4bf987e24a7b1025eae454aface698d1a73184"} Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.737992 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ef2efaa84f94bd0543d73eac4bf987e24a7b1025eae454aface698d1a73184" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.737970 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd" Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.741458 5121 generic.go:358] "Generic (PLEG): container finished" podID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerID="7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518" exitCode=0 Jan 26 00:24:18 crc kubenswrapper[5121]: I0126 00:24:18.741719 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29mkz" event={"ID":"5a1738e4-18bd-463f-bf1a-446a306c3f4e","Type":"ContainerDied","Data":"7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518"} Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.541483 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4"] Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.542132 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerName="util" Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.542146 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerName="util" Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.542156 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerName="pull" Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.542163 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerName="pull" Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.542172 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerName="extract" Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.542179 5121 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerName="extract" Jan 26 00:24:19 crc kubenswrapper[5121]: I0126 00:24:19.542274 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="f690edc2-1dd5-4fce-81a2-4355eda9213e" containerName="extract" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.576819 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29mkz" event={"ID":"5a1738e4-18bd-463f-bf1a-446a306c3f4e","Type":"ContainerStarted","Data":"28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3"} Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.579971 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.584138 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.592936 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4"] Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.605664 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd"] Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.637168 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-29mkz" podStartSLOduration=8.034597671 podStartE2EDuration="8.637152224s" podCreationTimestamp="2026-01-26 00:24:12 +0000 UTC" firstStartedPulling="2026-01-26 00:24:13.496389801 +0000 UTC m=+884.655590926" lastFinishedPulling="2026-01-26 00:24:14.098944364 +0000 UTC m=+885.258145479" observedRunningTime="2026-01-26 00:24:20.636698261 +0000 UTC m=+891.795899406" watchObservedRunningTime="2026-01-26 00:24:20.637152224 +0000 UTC m=+891.796353349" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.679153 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dwqp\" (UniqueName: \"kubernetes.io/projected/2b858a05-1513-4bd9-be86-ddabf9c23169-kube-api-access-2dwqp\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.679304 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.679362 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 
00:24:20.780610 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.781302 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2dwqp\" (UniqueName: \"kubernetes.io/projected/2b858a05-1513-4bd9-be86-ddabf9c23169-kube-api-access-2dwqp\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.781534 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.782137 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.781121 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.873432 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dwqp\" (UniqueName: \"kubernetes.io/projected/2b858a05-1513-4bd9-be86-ddabf9c23169-kube-api-access-2dwqp\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:20 crc kubenswrapper[5121]: I0126 00:24:20.907581 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.065078 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd"] Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.065226 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.089124 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.089416 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.089859 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7lg\" (UniqueName: \"kubernetes.io/projected/b3275426-f8e5-4f1d-9340-1d579ee79d7a-kube-api-access-sl7lg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.192182 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7lg\" (UniqueName: \"kubernetes.io/projected/b3275426-f8e5-4f1d-9340-1d579ee79d7a-kube-api-access-sl7lg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.193271 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.193335 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.193435 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.193955 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.209050 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4"] Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.217556 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7lg\" (UniqueName: \"kubernetes.io/projected/b3275426-f8e5-4f1d-9340-1d579ee79d7a-kube-api-access-sl7lg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.392013 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.773973 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" event={"ID":"2b858a05-1513-4bd9-be86-ddabf9c23169","Type":"ContainerStarted","Data":"1eccd4f6b0859e99e1ed44f187107c544868b5a821ddef0cfd794b09916c2319"} Jan 26 00:24:21 crc kubenswrapper[5121]: I0126 00:24:21.918534 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd"] Jan 26 00:24:21 crc kubenswrapper[5121]: W0126 00:24:21.930876 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3275426_f8e5_4f1d_9340_1d579ee79d7a.slice/crio-ac44f1be3232a8f71ed95457ffe85ad52ce2a950d9eea1239b13aebea2130ef7 WatchSource:0}: Error finding container ac44f1be3232a8f71ed95457ffe85ad52ce2a950d9eea1239b13aebea2130ef7: Status 404 returned error can't find the container with id ac44f1be3232a8f71ed95457ffe85ad52ce2a950d9eea1239b13aebea2130ef7 Jan 26 00:24:22 crc kubenswrapper[5121]: I0126 00:24:22.814537 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:22 crc kubenswrapper[5121]: I0126 00:24:22.814854 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:22 crc kubenswrapper[5121]: I0126 00:24:22.834238 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerStarted","Data":"ac44f1be3232a8f71ed95457ffe85ad52ce2a950d9eea1239b13aebea2130ef7"} Jan 26 00:24:23 crc kubenswrapper[5121]: I0126 00:24:23.842125 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerStarted","Data":"d8e4de7024edcf0b0ac5f564e976278c77d444226c44e8a8ab197cfafa411c53"} Jan 26 00:24:23 crc kubenswrapper[5121]: I0126 00:24:23.844781 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" event={"ID":"2b858a05-1513-4bd9-be86-ddabf9c23169","Type":"ContainerStarted","Data":"56fb5374a3540f18ae28dbe54c832c12d5742e04a3310e4996cf64c8d71ca120"} Jan 26 00:24:24 crc kubenswrapper[5121]: I0126 00:24:24.175718 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-29mkz" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="registry-server" probeResult="failure" output=< Jan 26 00:24:24 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Jan 26 00:24:24 crc kubenswrapper[5121]: > Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.075612 5121 generic.go:358] "Generic (PLEG): container finished" podID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerID="56fb5374a3540f18ae28dbe54c832c12d5742e04a3310e4996cf64c8d71ca120" exitCode=0 Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.075990 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" event={"ID":"2b858a05-1513-4bd9-be86-ddabf9c23169","Type":"ContainerDied","Data":"56fb5374a3540f18ae28dbe54c832c12d5742e04a3310e4996cf64c8d71ca120"} Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.099070 5121 generic.go:358] "Generic (PLEG): container finished" podID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerID="d8e4de7024edcf0b0ac5f564e976278c77d444226c44e8a8ab197cfafa411c53" exitCode=0 Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.099271 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerDied","Data":"d8e4de7024edcf0b0ac5f564e976278c77d444226c44e8a8ab197cfafa411c53"} Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.176130 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wm4t9"] Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.784624 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.838032 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm4t9"] Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.863277 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-utilities\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.863324 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-catalog-content\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.863361 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmfk\" (UniqueName: \"kubernetes.io/projected/469fee6d-d73d-4db2-b920-b3e28da4ffe7-kube-api-access-cvmfk\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.964591 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-utilities\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.964646 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-catalog-content\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.964815 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmfk\" (UniqueName: \"kubernetes.io/projected/469fee6d-d73d-4db2-b920-b3e28da4ffe7-kube-api-access-cvmfk\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.965249 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-catalog-content\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:25 crc kubenswrapper[5121]: I0126 00:24:25.965310 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-utilities\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.005105 5121 kubelet.go:2537] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99"] Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.008791 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmfk\" (UniqueName: \"kubernetes.io/projected/469fee6d-d73d-4db2-b920-b3e28da4ffe7-kube-api-access-cvmfk\") pod \"certified-operators-wm4t9\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.270931 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.765732 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.786341 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.786416 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cnvk\" (UniqueName: \"kubernetes.io/projected/43baf954-9ecd-4111-869d-c5e885c96085-kube-api-access-4cnvk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.786448 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.819857 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99"] Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.887396 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.887570 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 
00:24:26.887648 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4cnvk\" (UniqueName: \"kubernetes.io/projected/43baf954-9ecd-4111-869d-c5e885c96085-kube-api-access-4cnvk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.888001 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:26 crc kubenswrapper[5121]: I0126 00:24:26.888091 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:29 crc kubenswrapper[5121]: I0126 00:24:29.026110 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cnvk\" (UniqueName: \"kubernetes.io/projected/43baf954-9ecd-4111-869d-c5e885c96085-kube-api-access-4cnvk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:29 crc kubenswrapper[5121]: I0126 00:24:29.193891 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:24:29 crc kubenswrapper[5121]: I0126 00:24:29.692207 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm4t9"] Jan 26 00:24:30 crc kubenswrapper[5121]: I0126 00:24:30.885058 5121 scope.go:117] "RemoveContainer" containerID="080396410d3f9f1b10f9edd791b7580db8f0ce2ff8a0172f6d315d0997af7a4f" Jan 26 00:24:31 crc kubenswrapper[5121]: I0126 00:24:31.813985 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:24:31 crc kubenswrapper[5121]: I0126 00:24:31.814055 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:24:32 crc kubenswrapper[5121]: I0126 00:24:32.098080 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4t9" event={"ID":"469fee6d-d73d-4db2-b920-b3e28da4ffe7","Type":"ContainerStarted","Data":"fbd1fa7a4c879df8d500e5308f596c8e748c7bd2563f7e82b65340a7c7018daf"} Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.090113 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.094587 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.106542 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.112568 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.115202 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.116589 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.117957 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.120812 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.893850 5121 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99"] Jan 26 00:24:33 crc kubenswrapper[5121]: W0126 00:24:33.909683 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43baf954_9ecd_4111_869d_c5e885c96085.slice/crio-a2391d364f06b81b66d548a269f4ccde5c335b80c19eab369493d61469fd1685 WatchSource:0}: Error finding container a2391d364f06b81b66d548a269f4ccde5c335b80c19eab369493d61469fd1685: Status 404 returned error can't find the container with id a2391d364f06b81b66d548a269f4ccde5c335b80c19eab369493d61469fd1685 Jan 26 00:24:33 crc kubenswrapper[5121]: I0126 00:24:33.937212 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-29mkz" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="registry-server" probeResult="failure" output=< Jan 26 00:24:33 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Jan 26 00:24:33 crc kubenswrapper[5121]: > Jan 26 00:24:34 crc kubenswrapper[5121]: I0126 00:24:34.112927 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" event={"ID":"43baf954-9ecd-4111-869d-c5e885c96085","Type":"ContainerStarted","Data":"a2391d364f06b81b66d548a269f4ccde5c335b80c19eab369493d61469fd1685"} Jan 26 00:24:34 crc kubenswrapper[5121]: I0126 00:24:34.116272 5121 generic.go:358] "Generic (PLEG): container finished" podID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerID="e3522dcc7956a1968a26d128418ae9342d0169f4ff6ace49b9686f534f08cc89" exitCode=0 Jan 26 00:24:34 crc kubenswrapper[5121]: I0126 00:24:34.116364 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4t9" event={"ID":"469fee6d-d73d-4db2-b920-b3e28da4ffe7","Type":"ContainerDied","Data":"e3522dcc7956a1968a26d128418ae9342d0169f4ff6ace49b9686f534f08cc89"} Jan 26 00:24:35 crc kubenswrapper[5121]: I0126 00:24:35.135928 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" event={"ID":"2b858a05-1513-4bd9-be86-ddabf9c23169","Type":"ContainerStarted","Data":"de201a4d0af01dad269e4782891ae14ce2bdfb99513b78567272b06243a6b4ab"} Jan 26 00:24:35 crc kubenswrapper[5121]: I0126 00:24:35.152649 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerStarted","Data":"173af034603179a958d089670aace3ecb11ce9a7008d9696d12ea45e47777766"} Jan 26 00:24:36 crc kubenswrapper[5121]: I0126 00:24:36.162734 5121 generic.go:358] "Generic (PLEG): container finished" podID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerID="173af034603179a958d089670aace3ecb11ce9a7008d9696d12ea45e47777766" exitCode=0 Jan 26 00:24:36 crc kubenswrapper[5121]: I0126 00:24:36.162814 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerDied","Data":"173af034603179a958d089670aace3ecb11ce9a7008d9696d12ea45e47777766"} Jan 26 00:24:36 crc kubenswrapper[5121]: I0126 00:24:36.809277 5121 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6"] Jan 26 00:24:37 crc kubenswrapper[5121]: I0126 00:24:37.183769 5121 generic.go:358] "Generic (PLEG): container finished" podID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerID="de201a4d0af01dad269e4782891ae14ce2bdfb99513b78567272b06243a6b4ab" exitCode=0 Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.769035 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6"] Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.769128 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p"] Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.770316 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.774046 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.775204 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-k7xzc\"" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.781377 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.853317 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" event={"ID":"43baf954-9ecd-4111-869d-c5e885c96085","Type":"ContainerStarted","Data":"03ba694f2c95007185abd68c6765a348bc5fe32760a5f0948018197bb463eef8"} Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.854306 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.864457 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-x547k\"" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.878266 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.879403 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmfs6\" (UniqueName: \"kubernetes.io/projected/60174fba-616c-468e-987d-000b10781865-kube-api-access-hmfs6\") pod \"obo-prometheus-operator-9bc85b4bf-bszz6\" (UID: \"60174fba-616c-468e-987d-000b10781865\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" Jan 26 00:24:38 crc kubenswrapper[5121]: I0126 00:24:38.884400 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m"] Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.008140 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hmfs6\" (UniqueName: \"kubernetes.io/projected/60174fba-616c-468e-987d-000b10781865-kube-api-access-hmfs6\") pod \"obo-prometheus-operator-9bc85b4bf-bszz6\" (UID: \"60174fba-616c-468e-987d-000b10781865\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.008599 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p\" (UID: \"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.016656 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p\" (UID: \"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.120738 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p\" (UID: \"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.121996 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p\" (UID: \"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:39 crc kubenswrapper[5121]: 
I0126 00:24:39.140629 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p\" (UID: \"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.140629 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p\" (UID: \"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.203089 5121 generic.go:358] "Generic (PLEG): container finished" podID="43baf954-9ecd-4111-869d-c5e885c96085" containerID="03ba694f2c95007185abd68c6765a348bc5fe32760a5f0948018197bb463eef8" exitCode=0 Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.250621 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmfs6\" (UniqueName: \"kubernetes.io/projected/60174fba-616c-468e-987d-000b10781865-kube-api-access-hmfs6\") pod \"obo-prometheus-operator-9bc85b4bf-bszz6\" (UID: \"60174fba-616c-468e-987d-000b10781865\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.318703 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" Jan 26 00:24:39 crc kubenswrapper[5121]: I0126 00:24:39.397518 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.227060 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p"] Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.227122 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m"] Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.227137 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-6l7zp"] Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.228576 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.281393 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" podStartSLOduration=12.460933013 podStartE2EDuration="20.281362156s" podCreationTimestamp="2026-01-26 00:24:20 +0000 UTC" firstStartedPulling="2026-01-26 00:24:25.106551196 +0000 UTC m=+896.265752321" lastFinishedPulling="2026-01-26 00:24:32.926980339 +0000 UTC m=+904.086181464" observedRunningTime="2026-01-26 00:24:40.278977338 +0000 UTC m=+911.438178453" watchObservedRunningTime="2026-01-26 00:24:40.281362156 +0000 UTC m=+911.440563291" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.314489 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc26fef-f6c1-40f1-a725-2d56affc8312-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m\" (UID: \"1cc26fef-f6c1-40f1-a725-2d56affc8312\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.315358 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc26fef-f6c1-40f1-a725-2d56affc8312-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m\" (UID: \"1cc26fef-f6c1-40f1-a725-2d56affc8312\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.416399 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc26fef-f6c1-40f1-a725-2d56affc8312-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m\" (UID: \"1cc26fef-f6c1-40f1-a725-2d56affc8312\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.416793 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc26fef-f6c1-40f1-a725-2d56affc8312-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m\" (UID: \"1cc26fef-f6c1-40f1-a725-2d56affc8312\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.421613 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc26fef-f6c1-40f1-a725-2d56affc8312-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m\" (UID: \"1cc26fef-f6c1-40f1-a725-2d56affc8312\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:40 crc kubenswrapper[5121]: I0126 00:24:40.425334 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc26fef-f6c1-40f1-a725-2d56affc8312-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m\" (UID: \"1cc26fef-f6c1-40f1-a725-2d56affc8312\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:40 crc 
kubenswrapper[5121]: I0126 00:24:40.543369 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.178735 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" event={"ID":"2b858a05-1513-4bd9-be86-ddabf9c23169","Type":"ContainerDied","Data":"de201a4d0af01dad269e4782891ae14ce2bdfb99513b78567272b06243a6b4ab"} Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.178893 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerStarted","Data":"7fe9e569d9d87218a1e99a340c98c73aa62f82fc2697f27d9ca3078946b94230"} Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.178927 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-6l7zp"] Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.178935 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.178948 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-l57mr"] Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.182950 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-szcvt\"" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.183302 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.209250 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a6d80ea-c93d-4421-9b56-386c475b7a5d-observability-operator-tls\") pod \"observability-operator-85c68dddb-6l7zp\" (UID: \"2a6d80ea-c93d-4421-9b56-386c475b7a5d\") " pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.209548 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfsg\" (UniqueName: \"kubernetes.io/projected/2a6d80ea-c93d-4421-9b56-386c475b7a5d-kube-api-access-hsfsg\") pod \"observability-operator-85c68dddb-6l7zp\" (UID: \"2a6d80ea-c93d-4421-9b56-386c475b7a5d\") " pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.218660 5121 generic.go:358] "Generic (PLEG): container finished" podID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerID="7fe9e569d9d87218a1e99a340c98c73aa62f82fc2697f27d9ca3078946b94230" exitCode=0 Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.310546 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a6d80ea-c93d-4421-9b56-386c475b7a5d-observability-operator-tls\") pod \"observability-operator-85c68dddb-6l7zp\" (UID: \"2a6d80ea-c93d-4421-9b56-386c475b7a5d\") " pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 
00:24:41.310648 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hsfsg\" (UniqueName: \"kubernetes.io/projected/2a6d80ea-c93d-4421-9b56-386c475b7a5d-kube-api-access-hsfsg\") pod \"observability-operator-85c68dddb-6l7zp\" (UID: \"2a6d80ea-c93d-4421-9b56-386c475b7a5d\") " pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.319897 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a6d80ea-c93d-4421-9b56-386c475b7a5d-observability-operator-tls\") pod \"observability-operator-85c68dddb-6l7zp\" (UID: \"2a6d80ea-c93d-4421-9b56-386c475b7a5d\") " pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.364816 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsfsg\" (UniqueName: \"kubernetes.io/projected/2a6d80ea-c93d-4421-9b56-386c475b7a5d-kube-api-access-hsfsg\") pod \"observability-operator-85c68dddb-6l7zp\" (UID: \"2a6d80ea-c93d-4421-9b56-386c475b7a5d\") " pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.497044 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.953712 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" event={"ID":"43baf954-9ecd-4111-869d-c5e885c96085","Type":"ContainerDied","Data":"03ba694f2c95007185abd68c6765a348bc5fe32760a5f0948018197bb463eef8"} Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.953823 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerDied","Data":"7fe9e569d9d87218a1e99a340c98c73aa62f82fc2697f27d9ca3078946b94230"} Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.954426 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.958357 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-b9xnj\"" Jan 26 00:24:41 crc kubenswrapper[5121]: I0126 00:24:41.970020 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-l57mr"] Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.055669 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/20c492e3-8db9-46c1-8ccf-83a6b000115e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-l57mr\" (UID: \"20c492e3-8db9-46c1-8ccf-83a6b000115e\") " pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.056179 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2wp\" (UniqueName: \"kubernetes.io/projected/20c492e3-8db9-46c1-8ccf-83a6b000115e-kube-api-access-wd2wp\") pod \"perses-operator-669c9f96b5-l57mr\" (UID: \"20c492e3-8db9-46c1-8ccf-83a6b000115e\") " pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.157453 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/20c492e3-8db9-46c1-8ccf-83a6b000115e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-l57mr\" (UID: \"20c492e3-8db9-46c1-8ccf-83a6b000115e\") " pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.158010 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2wp\" (UniqueName: \"kubernetes.io/projected/20c492e3-8db9-46c1-8ccf-83a6b000115e-kube-api-access-wd2wp\") pod \"perses-operator-669c9f96b5-l57mr\" (UID: \"20c492e3-8db9-46c1-8ccf-83a6b000115e\") " pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.159064 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/20c492e3-8db9-46c1-8ccf-83a6b000115e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-l57mr\" (UID: \"20c492e3-8db9-46c1-8ccf-83a6b000115e\") " pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.181740 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2wp\" (UniqueName: \"kubernetes.io/projected/20c492e3-8db9-46c1-8ccf-83a6b000115e-kube-api-access-wd2wp\") pod \"perses-operator-669c9f96b5-l57mr\" (UID: \"20c492e3-8db9-46c1-8ccf-83a6b000115e\") " pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.277753 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.559107 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.564327 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-bundle\") pod \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.564392 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl7lg\" (UniqueName: \"kubernetes.io/projected/b3275426-f8e5-4f1d-9340-1d579ee79d7a-kube-api-access-sl7lg\") pod \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.564434 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-util\") pod \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\" (UID: \"b3275426-f8e5-4f1d-9340-1d579ee79d7a\") " Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.566249 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-bundle" (OuterVolumeSpecName: "bundle") pod "b3275426-f8e5-4f1d-9340-1d579ee79d7a" (UID: "b3275426-f8e5-4f1d-9340-1d579ee79d7a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.573592 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3275426-f8e5-4f1d-9340-1d579ee79d7a-kube-api-access-sl7lg" (OuterVolumeSpecName: "kube-api-access-sl7lg") pod "b3275426-f8e5-4f1d-9340-1d579ee79d7a" (UID: "b3275426-f8e5-4f1d-9340-1d579ee79d7a"). InnerVolumeSpecName "kube-api-access-sl7lg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.574540 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-util" (OuterVolumeSpecName: "util") pod "b3275426-f8e5-4f1d-9340-1d579ee79d7a" (UID: "b3275426-f8e5-4f1d-9340-1d579ee79d7a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.667810 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.667860 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sl7lg\" (UniqueName: \"kubernetes.io/projected/b3275426-f8e5-4f1d-9340-1d579ee79d7a-kube-api-access-sl7lg\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.667874 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3275426-f8e5-4f1d-9340-1d579ee79d7a-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:42 crc kubenswrapper[5121]: I0126 00:24:42.975204 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.140202 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.261139 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.261601 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd" event={"ID":"b3275426-f8e5-4f1d-9340-1d579ee79d7a","Type":"ContainerDied","Data":"ac44f1be3232a8f71ed95457ffe85ad52ce2a950d9eea1239b13aebea2130ef7"} Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.264879 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac44f1be3232a8f71ed95457ffe85ad52ce2a950d9eea1239b13aebea2130ef7" Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.481288 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m"] Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.758698 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p"] Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.772561 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-6l7zp"] Jan 26 00:24:43 crc kubenswrapper[5121]: W0126 00:24:43.784097 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6d80ea_c93d_4421_9b56_386c475b7a5d.slice/crio-e5da0ec81f72cdb0922904c999f3c444c765b1dbc31bece0356599f6af014941 WatchSource:0}: Error finding container e5da0ec81f72cdb0922904c999f3c444c765b1dbc31bece0356599f6af014941: Status 404 returned error can't find the container with id e5da0ec81f72cdb0922904c999f3c444c765b1dbc31bece0356599f6af014941 Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.941044 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6"] Jan 26 00:24:43 crc kubenswrapper[5121]: I0126 00:24:43.952325 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-l57mr"] Jan 26 00:24:43 crc 
kubenswrapper[5121]: W0126 00:24:43.962954 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60174fba_616c_468e_987d_000b10781865.slice/crio-ab3b3a27b6f64278d47eec50baeefbe5d3c2713d4e8ebef531a7e98882d33148 WatchSource:0}: Error finding container ab3b3a27b6f64278d47eec50baeefbe5d3c2713d4e8ebef531a7e98882d33148: Status 404 returned error can't find the container with id ab3b3a27b6f64278d47eec50baeefbe5d3c2713d4e8ebef531a7e98882d33148 Jan 26 00:24:43 crc kubenswrapper[5121]: W0126 00:24:43.963398 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c492e3_8db9_46c1_8ccf_83a6b000115e.slice/crio-7e3ca53df94fecaca81c8c4e0c1d5f708baacf7765bc967ab342b4674ef0de4d WatchSource:0}: Error finding container 7e3ca53df94fecaca81c8c4e0c1d5f708baacf7765bc967ab342b4674ef0de4d: Status 404 returned error can't find the container with id 7e3ca53df94fecaca81c8c4e0c1d5f708baacf7765bc967ab342b4674ef0de4d Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.275852 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29mkz"] Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.277297 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" event={"ID":"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a","Type":"ContainerStarted","Data":"f96736e285edf872d23c95621a659e66098eb447be9e455a454b5865791c71b2"} Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.279097 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" event={"ID":"60174fba-616c-468e-987d-000b10781865","Type":"ContainerStarted","Data":"ab3b3a27b6f64278d47eec50baeefbe5d3c2713d4e8ebef531a7e98882d33148"} Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.280603 5121 generic.go:358] "Generic (PLEG): container finished" podID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerID="e9091da8e7b682a8190be1fe36a920bae729db96d1bc31daef2129ed8603c45b" exitCode=0 Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.280649 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4t9" event={"ID":"469fee6d-d73d-4db2-b920-b3e28da4ffe7","Type":"ContainerDied","Data":"e9091da8e7b682a8190be1fe36a920bae729db96d1bc31daef2129ed8603c45b"} Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.284248 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-l57mr" event={"ID":"20c492e3-8db9-46c1-8ccf-83a6b000115e","Type":"ContainerStarted","Data":"7e3ca53df94fecaca81c8c4e0c1d5f708baacf7765bc967ab342b4674ef0de4d"} Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.287561 5121 generic.go:358] "Generic (PLEG): container finished" podID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerID="74574e067088bd6c5f473338f139ed7e478f26532b603791249467825fb0e7fe" exitCode=0 Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.287731 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" event={"ID":"2b858a05-1513-4bd9-be86-ddabf9c23169","Type":"ContainerDied","Data":"74574e067088bd6c5f473338f139ed7e478f26532b603791249467825fb0e7fe"} Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.289895 5121 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-operators/observability-operator-85c68dddb-6l7zp" event={"ID":"2a6d80ea-c93d-4421-9b56-386c475b7a5d","Type":"ContainerStarted","Data":"e5da0ec81f72cdb0922904c999f3c444c765b1dbc31bece0356599f6af014941"} Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.294332 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" event={"ID":"1cc26fef-f6c1-40f1-a725-2d56affc8312","Type":"ContainerStarted","Data":"20d21f8d0e5ba972ba375cbe2caebc90608d52801fa5d4ae7160563ed721da61"} Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.314536 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-29mkz" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="registry-server" containerID="cri-o://28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3" gracePeriod=2 Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.843036 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.940287 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-utilities\") pod \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.940515 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-catalog-content\") pod \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.940545 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldsvs\" (UniqueName: \"kubernetes.io/projected/5a1738e4-18bd-463f-bf1a-446a306c3f4e-kube-api-access-ldsvs\") pod \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\" (UID: \"5a1738e4-18bd-463f-bf1a-446a306c3f4e\") " Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.941782 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-utilities" (OuterVolumeSpecName: "utilities") pod "5a1738e4-18bd-463f-bf1a-446a306c3f4e" (UID: "5a1738e4-18bd-463f-bf1a-446a306c3f4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:44 crc kubenswrapper[5121]: I0126 00:24:44.947523 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a1738e4-18bd-463f-bf1a-446a306c3f4e-kube-api-access-ldsvs" (OuterVolumeSpecName: "kube-api-access-ldsvs") pod "5a1738e4-18bd-463f-bf1a-446a306c3f4e" (UID: "5a1738e4-18bd-463f-bf1a-446a306c3f4e"). InnerVolumeSpecName "kube-api-access-ldsvs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.042535 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.042664 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ldsvs\" (UniqueName: \"kubernetes.io/projected/5a1738e4-18bd-463f-bf1a-446a306c3f4e-kube-api-access-ldsvs\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.095746 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a1738e4-18bd-463f-bf1a-446a306c3f4e" (UID: "5a1738e4-18bd-463f-bf1a-446a306c3f4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.147369 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a1738e4-18bd-463f-bf1a-446a306c3f4e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.366964 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4t9" event={"ID":"469fee6d-d73d-4db2-b920-b3e28da4ffe7","Type":"ContainerStarted","Data":"c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b"} Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.378368 5121 generic.go:358] "Generic (PLEG): container finished" podID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerID="28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3" exitCode=0 Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.379107 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29mkz" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.381245 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29mkz" event={"ID":"5a1738e4-18bd-463f-bf1a-446a306c3f4e","Type":"ContainerDied","Data":"28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3"} Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.381315 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29mkz" event={"ID":"5a1738e4-18bd-463f-bf1a-446a306c3f4e","Type":"ContainerDied","Data":"c9fb06fe19907cbd96fdc1ec8e126508c5618ac28be2384ee1cd62645b1e6c71"} Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.381340 5121 scope.go:117] "RemoveContainer" containerID="28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.389902 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wm4t9" podStartSLOduration=12.929446614 podStartE2EDuration="20.389884499s" podCreationTimestamp="2026-01-26 00:24:25 +0000 UTC" firstStartedPulling="2026-01-26 00:24:35.152037948 +0000 UTC m=+906.311239073" lastFinishedPulling="2026-01-26 00:24:42.612475833 +0000 UTC m=+913.771676958" observedRunningTime="2026-01-26 00:24:45.388197891 +0000 UTC m=+916.547399036" watchObservedRunningTime="2026-01-26 00:24:45.389884499 +0000 UTC m=+916.549085624" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.439941 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29mkz"] Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.443714 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-29mkz"] Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.510603 5121 scope.go:117] "RemoveContainer" containerID="7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.567957 5121 scope.go:117] "RemoveContainer" containerID="74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.639743 5121 scope.go:117] "RemoveContainer" containerID="28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3" Jan 26 00:24:45 crc kubenswrapper[5121]: E0126 00:24:45.642133 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3\": container with ID starting with 28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3 not found: ID does not exist" containerID="28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.642172 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3"} err="failed to get container status \"28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3\": rpc error: code = NotFound desc = could not find container \"28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3\": container with ID starting with 28f842460f937720aad7499b96925976dfc42d04f913302cbfd9fda20897e0d3 not found: ID does not exist" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.642200 5121 scope.go:117] 
"RemoveContainer" containerID="7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518" Jan 26 00:24:45 crc kubenswrapper[5121]: E0126 00:24:45.644215 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518\": container with ID starting with 7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518 not found: ID does not exist" containerID="7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.644285 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518"} err="failed to get container status \"7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518\": rpc error: code = NotFound desc = could not find container \"7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518\": container with ID starting with 7b93bcb6c37d6e4842d0be0e74945bbf7ae855e26795063d233a6d7d10a9e518 not found: ID does not exist" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.644333 5121 scope.go:117] "RemoveContainer" containerID="74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd" Jan 26 00:24:45 crc kubenswrapper[5121]: E0126 00:24:45.647491 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd\": container with ID starting with 74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd not found: ID does not exist" containerID="74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd" Jan 26 00:24:45 crc kubenswrapper[5121]: I0126 00:24:45.647527 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd"} err="failed to get container status \"74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd\": rpc error: code = NotFound desc = could not find container \"74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd\": container with ID starting with 74b3e52f75cba3f1c97faa4e4a6b992e44f8d9b35d31c0508a3367b4d73125bd not found: ID does not exist" Jan 26 00:24:46 crc kubenswrapper[5121]: I0126 00:24:46.274883 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" path="/var/lib/kubelet/pods/5a1738e4-18bd-463f-bf1a-446a306c3f4e/volumes" Jan 26 00:24:46 crc kubenswrapper[5121]: I0126 00:24:46.277443 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:46 crc kubenswrapper[5121]: I0126 00:24:46.277515 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:46 crc kubenswrapper[5121]: I0126 00:24:46.330331 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:57 crc kubenswrapper[5121]: I0126 00:24:57.480656 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:24:57 crc kubenswrapper[5121]: I0126 00:24:57.522813 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-wm4t9"] Jan 26 00:24:57 crc kubenswrapper[5121]: I0126 00:24:57.560355 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wm4t9" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="registry-server" containerID="cri-o://c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b" gracePeriod=2 Jan 26 00:25:01 crc kubenswrapper[5121]: I0126 00:25:01.802408 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:25:01 crc kubenswrapper[5121]: I0126 00:25:01.802847 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:25:01 crc kubenswrapper[5121]: I0126 00:25:01.802898 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:25:01 crc kubenswrapper[5121]: I0126 00:25:01.803554 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8dff4e88b41be67d172c0dc3962b2a57b2fe7254550f8a45781d21ad403679a1"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:25:01 crc kubenswrapper[5121]: I0126 00:25:01.803609 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" containerID="cri-o://8dff4e88b41be67d172c0dc3962b2a57b2fe7254550f8a45781d21ad403679a1" gracePeriod=600 Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.116050 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.194838 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-util\") pod \"2b858a05-1513-4bd9-be86-ddabf9c23169\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.195163 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dwqp\" (UniqueName: \"kubernetes.io/projected/2b858a05-1513-4bd9-be86-ddabf9c23169-kube-api-access-2dwqp\") pod \"2b858a05-1513-4bd9-be86-ddabf9c23169\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.195199 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-bundle\") pod \"2b858a05-1513-4bd9-be86-ddabf9c23169\" (UID: \"2b858a05-1513-4bd9-be86-ddabf9c23169\") " Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.196529 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-bundle" (OuterVolumeSpecName: "bundle") pod "2b858a05-1513-4bd9-be86-ddabf9c23169" (UID: "2b858a05-1513-4bd9-be86-ddabf9c23169"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.206322 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-util" (OuterVolumeSpecName: "util") pod "2b858a05-1513-4bd9-be86-ddabf9c23169" (UID: "2b858a05-1513-4bd9-be86-ddabf9c23169"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.215419 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b858a05-1513-4bd9-be86-ddabf9c23169-kube-api-access-2dwqp" (OuterVolumeSpecName: "kube-api-access-2dwqp") pod "2b858a05-1513-4bd9-be86-ddabf9c23169" (UID: "2b858a05-1513-4bd9-be86-ddabf9c23169"). InnerVolumeSpecName "kube-api-access-2dwqp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.296556 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2dwqp\" (UniqueName: \"kubernetes.io/projected/2b858a05-1513-4bd9-be86-ddabf9c23169-kube-api-access-2dwqp\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.296967 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.296979 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b858a05-1513-4bd9-be86-ddabf9c23169-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.603311 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.605596 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="8dff4e88b41be67d172c0dc3962b2a57b2fe7254550f8a45781d21ad403679a1" exitCode=0 Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.605782 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"8dff4e88b41be67d172c0dc3962b2a57b2fe7254550f8a45781d21ad403679a1"} Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.605846 5121 scope.go:117] "RemoveContainer" containerID="32fd100fce0d17b0cb1b0932a20894d5150463e8a26ab26138c2ddecc38ffec5" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.608411 5121 generic.go:358] "Generic (PLEG): container finished" podID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerID="c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b" exitCode=0 Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.608541 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4t9" event={"ID":"469fee6d-d73d-4db2-b920-b3e28da4ffe7","Type":"ContainerDied","Data":"c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b"} Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.611089 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" event={"ID":"2b858a05-1513-4bd9-be86-ddabf9c23169","Type":"ContainerDied","Data":"1eccd4f6b0859e99e1ed44f187107c544868b5a821ddef0cfd794b09916c2319"} Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.611144 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eccd4f6b0859e99e1ed44f187107c544868b5a821ddef0cfd794b09916c2319" Jan 26 00:25:04 crc kubenswrapper[5121]: I0126 00:25:04.611181 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4" Jan 26 00:25:07 crc kubenswrapper[5121]: E0126 00:25:07.432240 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b is running failed: container process not found" containerID="c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:25:07 crc kubenswrapper[5121]: E0126 00:25:07.433129 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b is running failed: container process not found" containerID="c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:25:07 crc kubenswrapper[5121]: E0126 00:25:07.433529 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b is running failed: container process not found" containerID="c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:25:07 crc kubenswrapper[5121]: E0126 00:25:07.433569 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-wm4t9" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="registry-server" probeResult="unknown" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.554388 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-l9hwl"] Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.555939 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="pull" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.555968 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="pull" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556001 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="extract-utilities" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556014 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="extract-utilities" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556034 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="extract" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556055 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="extract" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556076 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="extract-content" Jan 26 
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.554388 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-l9hwl"]
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.555939 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="pull"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.555968 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="pull"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556001 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="extract-utilities"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556014 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="extract-utilities"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556034 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="extract"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556055 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="extract"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556076 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="extract-content"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556089 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="extract-content"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556115 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerName="util"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556127 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerName="util"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556158 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="registry-server"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556166 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="registry-server"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556173 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerName="extract"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556179 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerName="extract"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556197 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="util"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556206 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="util"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556219 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerName="pull"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556231 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerName="pull"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556379 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a1738e4-18bd-463f-bf1a-446a306c3f4e" containerName="registry-server"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556405 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="2b858a05-1513-4bd9-be86-ddabf9c23169" containerName="extract"
Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.556425 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3275426-f8e5-4f1d-9340-1d579ee79d7a" containerName="extract"
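The RemoveStaleState burst above is housekeeping triggered when a new pod is admitted: CPU and memory assignments recorded for containers of pods that no longer exist are dropped from the managers' state. A toy sketch of that pruning; the types are illustrative, not the kubelet's actual cpu_manager API.

```go
package main

import "fmt"

// key identifies an assignment the way the log lines do: (podUID, containerName).
type key struct{ podUID, container string }

// removeStaleState drops every assignment belonging to a pod that is no
// longer active, mirroring the "Deleted CPUSet assignment" lines above.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"2b858a05-1513-4bd9-be86-ddabf9c23169", "pull"}:            "cpus 0-1",
		{"5a1738e4-18bd-463f-bf1a-446a306c3f4e", "registry-server"}: "cpus 2-3",
	}
	removeStaleState(assignments, map[string]bool{}) // neither pod is active any more
	fmt.Println(len(assignments))                    // 0
}
```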
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.584803 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.585823 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.591151 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-csk6z\"" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.650951 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g5h4\" (UniqueName: \"kubernetes.io/projected/5fecaf04-348e-410b-877e-6f395dd95fd7-kube-api-access-8g5h4\") pod \"interconnect-operator-78b9bd8798-l9hwl\" (UID: \"5fecaf04-348e-410b-877e-6f395dd95fd7\") " pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.753183 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8g5h4\" (UniqueName: \"kubernetes.io/projected/5fecaf04-348e-410b-877e-6f395dd95fd7-kube-api-access-8g5h4\") pod \"interconnect-operator-78b9bd8798-l9hwl\" (UID: \"5fecaf04-348e-410b-877e-6f395dd95fd7\") " pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.789062 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g5h4\" (UniqueName: \"kubernetes.io/projected/5fecaf04-348e-410b-877e-6f395dd95fd7-kube-api-access-8g5h4\") pod \"interconnect-operator-78b9bd8798-l9hwl\" (UID: \"5fecaf04-348e-410b-877e-6f395dd95fd7\") " pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" Jan 26 00:25:10 crc kubenswrapper[5121]: I0126 00:25:10.962375 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" Jan 26 00:25:11 crc kubenswrapper[5121]: I0126 00:25:11.901138 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wm4t9" Jan 26 00:25:11 crc kubenswrapper[5121]: I0126 00:25:11.970482 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-utilities\") pod \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " Jan 26 00:25:11 crc kubenswrapper[5121]: I0126 00:25:11.970634 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvmfk\" (UniqueName: \"kubernetes.io/projected/469fee6d-d73d-4db2-b920-b3e28da4ffe7-kube-api-access-cvmfk\") pod \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " Jan 26 00:25:11 crc kubenswrapper[5121]: I0126 00:25:11.970749 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-catalog-content\") pod \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\" (UID: \"469fee6d-d73d-4db2-b920-b3e28da4ffe7\") " Jan 26 00:25:11 crc kubenswrapper[5121]: I0126 00:25:11.971835 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-utilities" (OuterVolumeSpecName: "utilities") pod "469fee6d-d73d-4db2-b920-b3e28da4ffe7" (UID: "469fee6d-d73d-4db2-b920-b3e28da4ffe7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:11 crc kubenswrapper[5121]: I0126 00:25:11.977993 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469fee6d-d73d-4db2-b920-b3e28da4ffe7-kube-api-access-cvmfk" (OuterVolumeSpecName: "kube-api-access-cvmfk") pod "469fee6d-d73d-4db2-b920-b3e28da4ffe7" (UID: "469fee6d-d73d-4db2-b920-b3e28da4ffe7"). InnerVolumeSpecName "kube-api-access-cvmfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.001602 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "469fee6d-d73d-4db2-b920-b3e28da4ffe7" (UID: "469fee6d-d73d-4db2-b920-b3e28da4ffe7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.072422 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvmfk\" (UniqueName: \"kubernetes.io/projected/469fee6d-d73d-4db2-b920-b3e28da4ffe7-kube-api-access-cvmfk\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.072815 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.072828 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469fee6d-d73d-4db2-b920-b3e28da4ffe7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.622022 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-65459685df-v5bqx"] Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.623212 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="extract-content" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.623289 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="extract-content" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.623384 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="extract-utilities" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.623439 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="extract-utilities" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.623505 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="registry-server" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.623557 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="registry-server" Jan 26 00:25:12 crc kubenswrapper[5121]: I0126 00:25:12.623724 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" containerName="registry-server" Jan 26 00:25:12 crc kubenswrapper[5121]: W0126 00:25:12.879854 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fecaf04_348e_410b_877e_6f395dd95fd7.slice/crio-56c387ca9036934949d79fa5254c1b8e452b00a54468610ba5fee8f205d0273e WatchSource:0}: Error finding container 56c387ca9036934949d79fa5254c1b8e452b00a54468610ba5fee8f205d0273e: Status 404 returned error can't find the container with id 56c387ca9036934949d79fa5254c1b8e452b00a54468610ba5fee8f205d0273e Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.192064 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-65459685df-v5bqx"] Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.192113 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"d40065c8f3cb43a8730adbc34bd9fe8db62d85dab732f2dbdec9e5ddf9d6e21f"} Jan 26 00:25:13 
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.192147 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4t9" event={"ID":"469fee6d-d73d-4db2-b920-b3e28da4ffe7","Type":"ContainerDied","Data":"fbd1fa7a4c879df8d500e5308f596c8e748c7bd2563f7e82b65340a7c7018daf"}
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.192169 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-l9hwl"]
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.192193 5121 scope.go:117] "RemoveContainer" containerID="c869bb03fd7404d7b9f641f890f6c4f771e6e8c41299d3f0ca4e674a7b703f3b"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.192013 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm4t9"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.193601 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-65459685df-v5bqx"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.195569 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4527249f-f1a1-4dde-8a58-19d2dd9c9260-apiservice-cert\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.195746 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjcd\" (UniqueName: \"kubernetes.io/projected/4527249f-f1a1-4dde-8a58-19d2dd9c9260-kube-api-access-7fjcd\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.195825 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4527249f-f1a1-4dde-8a58-19d2dd9c9260-webhook-cert\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.197163 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-8snkl\""
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.199704 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.262986 5121 scope.go:117] "RemoveContainer" containerID="e9091da8e7b682a8190be1fe36a920bae729db96d1bc31daef2129ed8603c45b"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.269126 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wm4t9"]
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.274627 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wm4t9"]
\"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.301639 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7fjcd\" (UniqueName: \"kubernetes.io/projected/4527249f-f1a1-4dde-8a58-19d2dd9c9260-kube-api-access-7fjcd\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.302190 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4527249f-f1a1-4dde-8a58-19d2dd9c9260-webhook-cert\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.307137 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4527249f-f1a1-4dde-8a58-19d2dd9c9260-apiservice-cert\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.307985 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4527249f-f1a1-4dde-8a58-19d2dd9c9260-webhook-cert\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.312989 5121 scope.go:117] "RemoveContainer" containerID="e3522dcc7956a1968a26d128418ae9342d0169f4ff6ace49b9686f534f08cc89" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.330612 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fjcd\" (UniqueName: \"kubernetes.io/projected/4527249f-f1a1-4dde-8a58-19d2dd9c9260-kube-api-access-7fjcd\") pod \"elastic-operator-65459685df-v5bqx\" (UID: \"4527249f-f1a1-4dde-8a58-19d2dd9c9260\") " pod="service-telemetry/elastic-operator-65459685df-v5bqx" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.527133 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-65459685df-v5bqx" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.779827 5121 generic.go:358] "Generic (PLEG): container finished" podID="43baf954-9ecd-4111-869d-c5e885c96085" containerID="ace6746da0b85d0879b1bb857af3c7615db394f02e9a0e1eb2e2ab739eaa9389" exitCode=0 Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.779961 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" event={"ID":"43baf954-9ecd-4111-869d-c5e885c96085","Type":"ContainerDied","Data":"ace6746da0b85d0879b1bb857af3c7615db394f02e9a0e1eb2e2ab739eaa9389"} Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.789499 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" event={"ID":"d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a","Type":"ContainerStarted","Data":"f1362cb98a0f475bf4c4977b2de4d3d1b8b637d2fa982981c027bab843b603f4"} Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.816929 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" event={"ID":"60174fba-616c-468e-987d-000b10781865","Type":"ContainerStarted","Data":"ccae0bbabdcb9257b5ab02310c6a749501798738454b363d8b81cdc5f514d8d6"} Jan 26 00:25:13 crc kubenswrapper[5121]: W0126 00:25:13.817751 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4527249f_f1a1_4dde_8a58_19d2dd9c9260.slice/crio-96060529595be2d4e4c9361ccdf675478e0226f2ddbaff9350453cb252577f09 WatchSource:0}: Error finding container 96060529595be2d4e4c9361ccdf675478e0226f2ddbaff9350453cb252577f09: Status 404 returned error can't find the container with id 96060529595be2d4e4c9361ccdf675478e0226f2ddbaff9350453cb252577f09 Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.819564 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" event={"ID":"5fecaf04-348e-410b-877e-6f395dd95fd7","Type":"ContainerStarted","Data":"56c387ca9036934949d79fa5254c1b8e452b00a54468610ba5fee8f205d0273e"} Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.827722 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-65459685df-v5bqx"] Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.831542 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-l57mr" event={"ID":"20c492e3-8db9-46c1-8ccf-83a6b000115e","Type":"ContainerStarted","Data":"dfaf246df4605430f7a1db201066aaa4566d1599814aa135d5b3fee0d234021d"} Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.832116 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.855478 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-6l7zp" event={"ID":"2a6d80ea-c93d-4421-9b56-386c475b7a5d","Type":"ContainerStarted","Data":"65fd8343a61d3d8600ae4ddae8257b6c17d0cf33bfd6baeb1b8074c1235f0a74"} Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.857805 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.865726 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" event={"ID":"1cc26fef-f6c1-40f1-a725-2d56affc8312","Type":"ContainerStarted","Data":"ccbc18c124f4dade2cb97675f7669fb578beb1276279ec9b6a514a5aede467da"}
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.938858 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-bszz6" podStartSLOduration=9.515960607 podStartE2EDuration="37.938814252s" podCreationTimestamp="2026-01-26 00:24:36 +0000 UTC" firstStartedPulling="2026-01-26 00:24:43.966666268 +0000 UTC m=+915.125867383" lastFinishedPulling="2026-01-26 00:25:12.389519913 +0000 UTC m=+943.548721028" observedRunningTime="2026-01-26 00:25:13.857553476 +0000 UTC m=+945.016754621" watchObservedRunningTime="2026-01-26 00:25:13.938814252 +0000 UTC m=+945.098015377"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.947458 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p" podStartSLOduration=9.371221285 podStartE2EDuration="37.947410168s" podCreationTimestamp="2026-01-26 00:24:36 +0000 UTC" firstStartedPulling="2026-01-26 00:24:43.780305094 +0000 UTC m=+914.939506219" lastFinishedPulling="2026-01-26 00:25:12.356493967 +0000 UTC m=+943.515695102" observedRunningTime="2026-01-26 00:25:13.938149793 +0000 UTC m=+945.097350928" watchObservedRunningTime="2026-01-26 00:25:13.947410168 +0000 UTC m=+945.106611303"
Jan 26 00:25:13 crc kubenswrapper[5121]: I0126 00:25:13.989250 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-6l7zp" podStartSLOduration=8.420276109 podStartE2EDuration="36.989219115s" podCreationTimestamp="2026-01-26 00:24:37 +0000 UTC" firstStartedPulling="2026-01-26 00:24:43.787470649 +0000 UTC m=+914.946671774" lastFinishedPulling="2026-01-26 00:25:12.356413655 +0000 UTC m=+943.515614780" observedRunningTime="2026-01-26 00:25:13.980049503 +0000 UTC m=+945.139250628" watchObservedRunningTime="2026-01-26 00:25:13.989219115 +0000 UTC m=+945.148420240"
Jan 26 00:25:14 crc kubenswrapper[5121]: I0126 00:25:14.012694 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m" podStartSLOduration=9.164139197 podStartE2EDuration="38.012667036s" podCreationTimestamp="2026-01-26 00:24:36 +0000 UTC" firstStartedPulling="2026-01-26 00:24:43.519468107 +0000 UTC m=+914.678669232" lastFinishedPulling="2026-01-26 00:25:12.367995946 +0000 UTC m=+943.527197071" observedRunningTime="2026-01-26 00:25:14.009668741 +0000 UTC m=+945.168869866" watchObservedRunningTime="2026-01-26 00:25:14.012667036 +0000 UTC m=+945.171868171"
Jan 26 00:25:14 crc kubenswrapper[5121]: I0126 00:25:14.036598 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-l57mr" podStartSLOduration=8.650130279 podStartE2EDuration="37.036578661s" podCreationTimestamp="2026-01-26 00:24:37 +0000 UTC" firstStartedPulling="2026-01-26 00:24:43.968861571 +0000 UTC m=+915.128062696" lastFinishedPulling="2026-01-26 00:25:12.355309953 +0000 UTC m=+943.514511078" observedRunningTime="2026-01-26 00:25:14.032278938 +0000 UTC m=+945.191480063" watchObservedRunningTime="2026-01-26 00:25:14.036578661 +0000 UTC m=+945.195779786"
m=+945.195779786" Jan 26 00:25:14 crc kubenswrapper[5121]: I0126 00:25:14.265562 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469fee6d-d73d-4db2-b920-b3e28da4ffe7" path="/var/lib/kubelet/pods/469fee6d-d73d-4db2-b920-b3e28da4ffe7/volumes" Jan 26 00:25:14 crc kubenswrapper[5121]: I0126 00:25:14.673345 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-6l7zp" Jan 26 00:25:14 crc kubenswrapper[5121]: I0126 00:25:14.885438 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-65459685df-v5bqx" event={"ID":"4527249f-f1a1-4dde-8a58-19d2dd9c9260","Type":"ContainerStarted","Data":"96060529595be2d4e4c9361ccdf675478e0226f2ddbaff9350453cb252577f09"} Jan 26 00:25:14 crc kubenswrapper[5121]: I0126 00:25:14.891385 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" event={"ID":"43baf954-9ecd-4111-869d-c5e885c96085","Type":"ContainerDied","Data":"b89bd2d7b31c274d9c464aba1f88f75b23d941ad453fd1caa3470df772e6a611"} Jan 26 00:25:14 crc kubenswrapper[5121]: I0126 00:25:14.891233 5121 generic.go:358] "Generic (PLEG): container finished" podID="43baf954-9ecd-4111-869d-c5e885c96085" containerID="b89bd2d7b31c274d9c464aba1f88f75b23d941ad453fd1caa3470df772e6a611" exitCode=0 Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.233245 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.273185 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-bundle\") pod \"43baf954-9ecd-4111-869d-c5e885c96085\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.273341 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-util\") pod \"43baf954-9ecd-4111-869d-c5e885c96085\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.273377 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cnvk\" (UniqueName: \"kubernetes.io/projected/43baf954-9ecd-4111-869d-c5e885c96085-kube-api-access-4cnvk\") pod \"43baf954-9ecd-4111-869d-c5e885c96085\" (UID: \"43baf954-9ecd-4111-869d-c5e885c96085\") " Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.283098 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-bundle" (OuterVolumeSpecName: "bundle") pod "43baf954-9ecd-4111-869d-c5e885c96085" (UID: "43baf954-9ecd-4111-869d-c5e885c96085"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.287732 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43baf954-9ecd-4111-869d-c5e885c96085-kube-api-access-4cnvk" (OuterVolumeSpecName: "kube-api-access-4cnvk") pod "43baf954-9ecd-4111-869d-c5e885c96085" (UID: "43baf954-9ecd-4111-869d-c5e885c96085"). InnerVolumeSpecName "kube-api-access-4cnvk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.288878 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-util" (OuterVolumeSpecName: "util") pod "43baf954-9ecd-4111-869d-c5e885c96085" (UID: "43baf954-9ecd-4111-869d-c5e885c96085"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.376653 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.376696 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43baf954-9ecd-4111-869d-c5e885c96085-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.376706 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4cnvk\" (UniqueName: \"kubernetes.io/projected/43baf954-9ecd-4111-869d-c5e885c96085-kube-api-access-4cnvk\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.935347 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" event={"ID":"43baf954-9ecd-4111-869d-c5e885c96085","Type":"ContainerDied","Data":"a2391d364f06b81b66d548a269f4ccde5c335b80c19eab369493d61469fd1685"} Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.935393 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2391d364f06b81b66d548a269f4ccde5c335b80c19eab369493d61469fd1685" Jan 26 00:25:16 crc kubenswrapper[5121]: I0126 00:25:16.935493 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99" Jan 26 00:25:25 crc kubenswrapper[5121]: I0126 00:25:25.916435 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-l57mr" Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.987386 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx"] Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.999308 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43baf954-9ecd-4111-869d-c5e885c96085" containerName="pull" Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.999350 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="43baf954-9ecd-4111-869d-c5e885c96085" containerName="pull" Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.999386 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43baf954-9ecd-4111-869d-c5e885c96085" containerName="util" Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.999397 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="43baf954-9ecd-4111-869d-c5e885c96085" containerName="util" Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.999413 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43baf954-9ecd-4111-869d-c5e885c96085" containerName="extract" Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.999420 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="43baf954-9ecd-4111-869d-c5e885c96085" containerName="extract" Jan 26 00:25:30 crc kubenswrapper[5121]: I0126 00:25:30.999571 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="43baf954-9ecd-4111-869d-c5e885c96085" containerName="extract" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.026924 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx"] Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.027140 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.036974 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.037272 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-w8m8t\"" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.037425 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.072017 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c4035b9-64f1-4733-ae79-0a771fc7204e-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-dngtx\" (UID: \"0c4035b9-64f1-4733-ae79-0a771fc7204e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.072605 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldr7\" (UniqueName: \"kubernetes.io/projected/0c4035b9-64f1-4733-ae79-0a771fc7204e-kube-api-access-bldr7\") pod \"cert-manager-operator-controller-manager-64c74584c4-dngtx\" (UID: \"0c4035b9-64f1-4733-ae79-0a771fc7204e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.173708 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bldr7\" (UniqueName: \"kubernetes.io/projected/0c4035b9-64f1-4733-ae79-0a771fc7204e-kube-api-access-bldr7\") pod \"cert-manager-operator-controller-manager-64c74584c4-dngtx\" (UID: \"0c4035b9-64f1-4733-ae79-0a771fc7204e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.173836 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c4035b9-64f1-4733-ae79-0a771fc7204e-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-dngtx\" (UID: \"0c4035b9-64f1-4733-ae79-0a771fc7204e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.174468 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c4035b9-64f1-4733-ae79-0a771fc7204e-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-dngtx\" (UID: \"0c4035b9-64f1-4733-ae79-0a771fc7204e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.199859 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldr7\" (UniqueName: \"kubernetes.io/projected/0c4035b9-64f1-4733-ae79-0a771fc7204e-kube-api-access-bldr7\") pod \"cert-manager-operator-controller-manager-64c74584c4-dngtx\" (UID: \"0c4035b9-64f1-4733-ae79-0a771fc7204e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.348864 5121 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" Jan 26 00:25:31 crc kubenswrapper[5121]: I0126 00:25:31.390738 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-65459685df-v5bqx" event={"ID":"4527249f-f1a1-4dde-8a58-19d2dd9c9260","Type":"ContainerStarted","Data":"240258a430539a897811e2f4baef0b382cb7ee33cd523d4da952b572243c3248"} Jan 26 00:25:32 crc kubenswrapper[5121]: I0126 00:25:32.279113 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx"] Jan 26 00:25:32 crc kubenswrapper[5121]: W0126 00:25:32.287910 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4035b9_64f1_4733_ae79_0a771fc7204e.slice/crio-e1aee51afd08e9bca3dd721a52921467dd21797db163848652dc07d1383e8cd1 WatchSource:0}: Error finding container e1aee51afd08e9bca3dd721a52921467dd21797db163848652dc07d1383e8cd1: Status 404 returned error can't find the container with id e1aee51afd08e9bca3dd721a52921467dd21797db163848652dc07d1383e8cd1 Jan 26 00:25:32 crc kubenswrapper[5121]: I0126 00:25:32.399083 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" event={"ID":"0c4035b9-64f1-4733-ae79-0a771fc7204e","Type":"ContainerStarted","Data":"e1aee51afd08e9bca3dd721a52921467dd21797db163848652dc07d1383e8cd1"} Jan 26 00:25:32 crc kubenswrapper[5121]: I0126 00:25:32.412794 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" event={"ID":"5fecaf04-348e-410b-877e-6f395dd95fd7","Type":"ContainerStarted","Data":"2d1f2f12c3c2c6863c6c16d4a6f3b4e49217dfcaa985837dd0f6a208ff3eeb98"} Jan 26 00:25:32 crc kubenswrapper[5121]: I0126 00:25:32.437640 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-l9hwl" podStartSLOduration=3.731104457 podStartE2EDuration="22.4376145s" podCreationTimestamp="2026-01-26 00:25:10 +0000 UTC" firstStartedPulling="2026-01-26 00:25:12.88444067 +0000 UTC m=+944.043641795" lastFinishedPulling="2026-01-26 00:25:31.590950713 +0000 UTC m=+962.750151838" observedRunningTime="2026-01-26 00:25:32.43168031 +0000 UTC m=+963.590881455" watchObservedRunningTime="2026-01-26 00:25:32.4376145 +0000 UTC m=+963.596815615" Jan 26 00:25:32 crc kubenswrapper[5121]: I0126 00:25:32.454241 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-65459685df-v5bqx" podStartSLOduration=3.230813738 podStartE2EDuration="20.454213096s" podCreationTimestamp="2026-01-26 00:25:12 +0000 UTC" firstStartedPulling="2026-01-26 00:25:13.858535184 +0000 UTC m=+945.017736299" lastFinishedPulling="2026-01-26 00:25:31.081934532 +0000 UTC m=+962.241135657" observedRunningTime="2026-01-26 00:25:32.449678516 +0000 UTC m=+963.608879671" watchObservedRunningTime="2026-01-26 00:25:32.454213096 +0000 UTC m=+963.613414221" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.154607 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.953066 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:25:33 crc 
kubenswrapper[5121]: I0126 00:25:33.953257 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.963435 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.963956 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.964003 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.964212 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.964286 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-gkrmd\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.964393 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.964417 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.964744 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 26 00:25:33 crc kubenswrapper[5121]: I0126 00:25:33.969041 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041073 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041134 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041227 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041281 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041452 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041547 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041576 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041604 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041640 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041663 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041711 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041792 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041847 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.041878 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.042068 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.143620 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.144401 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145112 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.143753 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145324 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: 
\"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145374 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145425 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145471 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145571 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145620 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145653 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145728 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: 
\"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145785 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145840 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.145872 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.146359 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.146433 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.146850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.147237 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.148035 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.149100 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.149699 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.154358 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.155388 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.157540 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.159179 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.171726 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.173137 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.175370 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/b7b0a37f-32ac-4e4e-bdd2-4139d54903b6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" 
(UID: \"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:34 crc kubenswrapper[5121]: I0126 00:25:34.279308 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:25:35 crc kubenswrapper[5121]: I0126 00:25:35.675285 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:25:36 crc kubenswrapper[5121]: I0126 00:25:36.463017 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6","Type":"ContainerStarted","Data":"16491ca39b9c7451dddc679407277d5e1f2e12569e9e0adea36795e3ffea5650"} Jan 26 00:25:57 crc kubenswrapper[5121]: I0126 00:25:57.483443 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rfx9h"] Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.512198 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.571829 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfx9h"] Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.571908 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.861747 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-utilities\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.861824 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-catalog-content\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.861892 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhl5\" (UniqueName: \"kubernetes.io/projected/1cdc1b28-c53b-411c-8056-5bce55d60e1d-kube-api-access-lmhl5\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.963807 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-utilities\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.963919 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-catalog-content\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc 
kubenswrapper[5121]: I0126 00:25:58.963994 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhl5\" (UniqueName: \"kubernetes.io/projected/1cdc1b28-c53b-411c-8056-5bce55d60e1d-kube-api-access-lmhl5\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.964706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-utilities\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.965153 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-catalog-content\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:58 crc kubenswrapper[5121]: I0126 00:25:58.995982 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhl5\" (UniqueName: \"kubernetes.io/projected/1cdc1b28-c53b-411c-8056-5bce55d60e1d-kube-api-access-lmhl5\") pod \"community-operators-rfx9h\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.005936 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.006215 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.010516 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.010547 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.010522 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-n9bc6\"" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.012870 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.077579 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.077801 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.077918 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.077941 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.077980 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.078008 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrxr\" (UniqueName: \"kubernetes.io/projected/dad1d4de-99d3-480a-b6fd-bad440e6bf75-kube-api-access-pdrxr\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc 
kubenswrapper[5121]: I0126 00:25:59.078030 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.078181 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.078254 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.078311 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.078355 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.078433 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.172235 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.288546 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.288624 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.288643 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.288748 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.288903 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.288984 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrxr\" (UniqueName: \"kubernetes.io/projected/dad1d4de-99d3-480a-b6fd-bad440e6bf75-kube-api-access-pdrxr\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289016 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289117 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289170 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289261 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289310 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289498 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289557 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289683 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289712 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.289891 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.291021 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.291282 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.291446 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.293545 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.293743 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.306547 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.307300 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.320843 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrxr\" (UniqueName: \"kubernetes.io/projected/dad1d4de-99d3-480a-b6fd-bad440e6bf75-kube-api-access-pdrxr\") pod \"service-telemetry-operator-1-build\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:25:59 crc kubenswrapper[5121]: I0126 00:25:59.345190 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.162661 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489786-ghsf9"] Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.236028 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-ghsf9"] Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.236192 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.241081 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.241391 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.242268 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.403596 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrmlv\" (UniqueName: \"kubernetes.io/projected/34987d5f-649b-444c-a15e-482e13593729-kube-api-access-rrmlv\") pod \"auto-csr-approver-29489786-ghsf9\" (UID: \"34987d5f-649b-444c-a15e-482e13593729\") " pod="openshift-infra/auto-csr-approver-29489786-ghsf9" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.506359 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrmlv\" (UniqueName: \"kubernetes.io/projected/34987d5f-649b-444c-a15e-482e13593729-kube-api-access-rrmlv\") pod \"auto-csr-approver-29489786-ghsf9\" (UID: \"34987d5f-649b-444c-a15e-482e13593729\") " pod="openshift-infra/auto-csr-approver-29489786-ghsf9" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.535740 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrmlv\" (UniqueName: \"kubernetes.io/projected/34987d5f-649b-444c-a15e-482e13593729-kube-api-access-rrmlv\") pod \"auto-csr-approver-29489786-ghsf9\" (UID: \"34987d5f-649b-444c-a15e-482e13593729\") " pod="openshift-infra/auto-csr-approver-29489786-ghsf9" Jan 26 00:26:00 crc kubenswrapper[5121]: I0126 00:26:00.561867 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" Jan 26 00:26:08 crc kubenswrapper[5121]: I0126 00:26:08.911147 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:26:11 crc kubenswrapper[5121]: I0126 00:26:11.663487 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:26:12 crc kubenswrapper[5121]: I0126 00:26:12.858256 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:12 crc kubenswrapper[5121]: I0126 00:26:12.864024 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Jan 26 00:26:12 crc kubenswrapper[5121]: I0126 00:26:12.864273 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Jan 26 00:26:12 crc kubenswrapper[5121]: I0126 00:26:12.866410 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Jan 26 00:26:12 crc kubenswrapper[5121]: I0126 00:26:12.872436 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012662 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012748 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88htm\" (UniqueName: \"kubernetes.io/projected/82be670b-4a27-4319-8431-ac1b86d3fc1a-kube-api-access-88htm\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012792 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012831 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012857 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012908 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc 
kubenswrapper[5121]: I0126 00:26:13.012949 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012972 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.012991 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.013015 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.013060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.013082 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115028 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115125 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115149 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115173 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115229 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115345 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115382 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115410 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-88htm\" (UniqueName: \"kubernetes.io/projected/82be670b-4a27-4319-8431-ac1b86d3fc1a-kube-api-access-88htm\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115385 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115433 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115674 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-run\") pod \"service-telemetry-operator-2-build\" 
(UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115739 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115891 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.116016 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.115977 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.116271 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.117058 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.117079 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.117128 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.120251 5121 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.120377 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.127285 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.127665 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.138217 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-88htm\" (UniqueName: \"kubernetes.io/projected/82be670b-4a27-4319-8431-ac1b86d3fc1a-kube-api-access-88htm\") pod \"service-telemetry-operator-2-build\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:13 crc kubenswrapper[5121]: I0126 00:26:13.198097 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:26:17 crc kubenswrapper[5121]: I0126 00:26:17.935240 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfx9h"] Jan 26 00:26:18 crc kubenswrapper[5121]: I0126 00:26:18.179394 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:26:18 crc kubenswrapper[5121]: W0126 00:26:18.189636 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddad1d4de_99d3_480a_b6fd_bad440e6bf75.slice/crio-2cc42f0ef65990d8bca37a1ea661ff79ec3f1708ec8c40275bad3ac636da2fa0 WatchSource:0}: Error finding container 2cc42f0ef65990d8bca37a1ea661ff79ec3f1708ec8c40275bad3ac636da2fa0: Status 404 returned error can't find the container with id 2cc42f0ef65990d8bca37a1ea661ff79ec3f1708ec8c40275bad3ac636da2fa0 Jan 26 00:26:18 crc kubenswrapper[5121]: I0126 00:26:18.238210 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-ghsf9"] Jan 26 00:26:18 crc kubenswrapper[5121]: I0126 00:26:18.384795 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:26:18 crc kubenswrapper[5121]: I0126 00:26:18.612868 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" event={"ID":"0c4035b9-64f1-4733-ae79-0a771fc7204e","Type":"ContainerStarted","Data":"b5619d4c2e876fca4105f8d76cb8d54557d872a2524a138a4640097a6543151f"} Jan 26 00:26:18 crc kubenswrapper[5121]: I0126 00:26:18.614788 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"dad1d4de-99d3-480a-b6fd-bad440e6bf75","Type":"ContainerStarted","Data":"2cc42f0ef65990d8bca37a1ea661ff79ec3f1708ec8c40275bad3ac636da2fa0"} Jan 26 00:26:18 crc kubenswrapper[5121]: I0126 00:26:18.617545 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerStarted","Data":"4a1bd2c342b23ce77bf4a7ae4893c468553cf9447a92585f8e545a45a0e26599"} Jan 26 00:26:18 crc kubenswrapper[5121]: I0126 00:26:18.617593 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerStarted","Data":"6a666ce454c92cf6d39fc98d8eef8b9fda2d7a18563553b93b7dc71aa9b6289e"} Jan 26 00:26:19 crc kubenswrapper[5121]: I0126 00:26:19.626285 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"82be670b-4a27-4319-8431-ac1b86d3fc1a","Type":"ContainerStarted","Data":"557b4807447a79dc4010417b8aa58553a7a3a2117a54b1ab2c7bacf9ec604cf8"} Jan 26 00:26:19 crc kubenswrapper[5121]: I0126 00:26:19.629428 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" event={"ID":"34987d5f-649b-444c-a15e-482e13593729","Type":"ContainerStarted","Data":"bd6f32331a61505bb6a12ea53f78a8d5c5a371d42a2bad03c6b2d32a1341bd48"} Jan 26 00:26:19 crc kubenswrapper[5121]: I0126 00:26:19.632855 5121 generic.go:358] "Generic (PLEG): container finished" podID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" 
containerID="4a1bd2c342b23ce77bf4a7ae4893c468553cf9447a92585f8e545a45a0e26599" exitCode=0 Jan 26 00:26:19 crc kubenswrapper[5121]: I0126 00:26:19.632908 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerDied","Data":"4a1bd2c342b23ce77bf4a7ae4893c468553cf9447a92585f8e545a45a0e26599"} Jan 26 00:26:19 crc kubenswrapper[5121]: I0126 00:26:19.769108 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-dngtx" podStartSLOduration=5.51346914 podStartE2EDuration="49.769085445s" podCreationTimestamp="2026-01-26 00:25:30 +0000 UTC" firstStartedPulling="2026-01-26 00:25:32.292090875 +0000 UTC m=+963.451292000" lastFinishedPulling="2026-01-26 00:26:16.54770718 +0000 UTC m=+1007.706908305" observedRunningTime="2026-01-26 00:26:19.76752828 +0000 UTC m=+1010.926729415" watchObservedRunningTime="2026-01-26 00:26:19.769085445 +0000 UTC m=+1010.928286570" Jan 26 00:26:20 crc kubenswrapper[5121]: I0126 00:26:20.661300 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6","Type":"ContainerStarted","Data":"f32d2c3db83d0a543c0d0007490957a091ae4228f37c4600133d3f7f21e1043c"} Jan 26 00:26:20 crc kubenswrapper[5121]: I0126 00:26:20.832487 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:26:20 crc kubenswrapper[5121]: I0126 00:26:20.870469 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:26:22 crc kubenswrapper[5121]: I0126 00:26:22.679587 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerStarted","Data":"9b599236031bf205b02a133df96dbc8f1fda96fd1b4daf48f362c40dfc395093"} Jan 26 00:26:22 crc kubenswrapper[5121]: I0126 00:26:22.685380 5121 generic.go:358] "Generic (PLEG): container finished" podID="b7b0a37f-32ac-4e4e-bdd2-4139d54903b6" containerID="f32d2c3db83d0a543c0d0007490957a091ae4228f37c4600133d3f7f21e1043c" exitCode=0 Jan 26 00:26:22 crc kubenswrapper[5121]: I0126 00:26:22.685500 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6","Type":"ContainerDied","Data":"f32d2c3db83d0a543c0d0007490957a091ae4228f37c4600133d3f7f21e1043c"} Jan 26 00:26:22 crc kubenswrapper[5121]: I0126 00:26:22.824256 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-cddv6"] Jan 26 00:26:24 crc kubenswrapper[5121]: I0126 00:26:24.706203 5121 generic.go:358] "Generic (PLEG): container finished" podID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerID="9b599236031bf205b02a133df96dbc8f1fda96fd1b4daf48f362c40dfc395093" exitCode=0 Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.123619 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-cddv6"] Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.123665 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672"] Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.123897 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.129823 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.130140 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.130820 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-4bcn5\"" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.231589 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcxz\" (UniqueName: \"kubernetes.io/projected/68b12486-8042-4570-bd05-6bb6664c0a2c-kube-api-access-phcxz\") pod \"cert-manager-webhook-7894b5b9b4-cddv6\" (UID: \"68b12486-8042-4570-bd05-6bb6664c0a2c\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.231909 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b12486-8042-4570-bd05-6bb6664c0a2c-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-cddv6\" (UID: \"68b12486-8042-4570-bd05-6bb6664c0a2c\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.260716 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerDied","Data":"9b599236031bf205b02a133df96dbc8f1fda96fd1b4daf48f362c40dfc395093"} Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.260795 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672"] Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.260948 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.263587 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-2fvzw\"" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.334795 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ztf\" (UniqueName: \"kubernetes.io/projected/03fe3312-4dbf-42da-bf01-4f541b24d3df-kube-api-access-59ztf\") pod \"cert-manager-cainjector-7dbf76d5c8-f7672\" (UID: \"03fe3312-4dbf-42da-bf01-4f541b24d3df\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.334949 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b12486-8042-4570-bd05-6bb6664c0a2c-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-cddv6\" (UID: \"68b12486-8042-4570-bd05-6bb6664c0a2c\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.334974 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03fe3312-4dbf-42da-bf01-4f541b24d3df-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-f7672\" (UID: \"03fe3312-4dbf-42da-bf01-4f541b24d3df\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.335043 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phcxz\" (UniqueName: \"kubernetes.io/projected/68b12486-8042-4570-bd05-6bb6664c0a2c-kube-api-access-phcxz\") pod \"cert-manager-webhook-7894b5b9b4-cddv6\" (UID: \"68b12486-8042-4570-bd05-6bb6664c0a2c\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.362731 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b12486-8042-4570-bd05-6bb6664c0a2c-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-cddv6\" (UID: \"68b12486-8042-4570-bd05-6bb6664c0a2c\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.362881 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phcxz\" (UniqueName: \"kubernetes.io/projected/68b12486-8042-4570-bd05-6bb6664c0a2c-kube-api-access-phcxz\") pod \"cert-manager-webhook-7894b5b9b4-cddv6\" (UID: \"68b12486-8042-4570-bd05-6bb6664c0a2c\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.436867 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03fe3312-4dbf-42da-bf01-4f541b24d3df-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-f7672\" (UID: \"03fe3312-4dbf-42da-bf01-4f541b24d3df\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.437010 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59ztf\" (UniqueName: \"kubernetes.io/projected/03fe3312-4dbf-42da-bf01-4f541b24d3df-kube-api-access-59ztf\") pod 
\"cert-manager-cainjector-7dbf76d5c8-f7672\" (UID: \"03fe3312-4dbf-42da-bf01-4f541b24d3df\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.456314 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.457634 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ztf\" (UniqueName: \"kubernetes.io/projected/03fe3312-4dbf-42da-bf01-4f541b24d3df-kube-api-access-59ztf\") pod \"cert-manager-cainjector-7dbf76d5c8-f7672\" (UID: \"03fe3312-4dbf-42da-bf01-4f541b24d3df\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.459347 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03fe3312-4dbf-42da-bf01-4f541b24d3df-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-f7672\" (UID: \"03fe3312-4dbf-42da-bf01-4f541b24d3df\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.580732 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" Jan 26 00:26:25 crc kubenswrapper[5121]: I0126 00:26:25.718054 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6","Type":"ContainerStarted","Data":"147065d04217e4465d7b3d99ee26315c3746af260b52628cfb107ed349eaf922"} Jan 26 00:26:26 crc kubenswrapper[5121]: I0126 00:26:26.565557 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" event={"ID":"34987d5f-649b-444c-a15e-482e13593729","Type":"ContainerStarted","Data":"d7101112d14185db154a66a8852229eb8522dcdf484dccbe76bbfa23cda91564"} Jan 26 00:26:26 crc kubenswrapper[5121]: I0126 00:26:26.582071 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" podStartSLOduration=24.285410499 podStartE2EDuration="26.582045283s" podCreationTimestamp="2026-01-26 00:26:00 +0000 UTC" firstStartedPulling="2026-01-26 00:26:19.335938645 +0000 UTC m=+1010.495139770" lastFinishedPulling="2026-01-26 00:26:21.632573429 +0000 UTC m=+1012.791774554" observedRunningTime="2026-01-26 00:26:26.576986188 +0000 UTC m=+1017.736187323" watchObservedRunningTime="2026-01-26 00:26:26.582045283 +0000 UTC m=+1017.741246408" Jan 26 00:26:27 crc kubenswrapper[5121]: I0126 00:26:27.541590 5121 generic.go:358] "Generic (PLEG): container finished" podID="34987d5f-649b-444c-a15e-482e13593729" containerID="d7101112d14185db154a66a8852229eb8522dcdf484dccbe76bbfa23cda91564" exitCode=0 Jan 26 00:26:27 crc kubenswrapper[5121]: I0126 00:26:27.542076 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" event={"ID":"34987d5f-649b-444c-a15e-482e13593729","Type":"ContainerDied","Data":"d7101112d14185db154a66a8852229eb8522dcdf484dccbe76bbfa23cda91564"} Jan 26 00:26:27 crc kubenswrapper[5121]: I0126 00:26:27.560962 5121 generic.go:358] "Generic (PLEG): container finished" podID="b7b0a37f-32ac-4e4e-bdd2-4139d54903b6" containerID="147065d04217e4465d7b3d99ee26315c3746af260b52628cfb107ed349eaf922" exitCode=0 Jan 26 00:26:27 crc kubenswrapper[5121]: I0126 
00:26:27.562086 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6","Type":"ContainerDied","Data":"147065d04217e4465d7b3d99ee26315c3746af260b52628cfb107ed349eaf922"} Jan 26 00:26:41 crc kubenswrapper[5121]: I0126 00:26:41.738641 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-fl77x"] Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.758248 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.762021 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-scs7p\"" Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.765878 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-fl77x"] Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.797328 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4cl\" (UniqueName: \"kubernetes.io/projected/ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4-kube-api-access-6b4cl\") pod \"cert-manager-858d87f86b-fl77x\" (UID: \"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4\") " pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.797732 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4-bound-sa-token\") pod \"cert-manager-858d87f86b-fl77x\" (UID: \"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4\") " pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.899190 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6b4cl\" (UniqueName: \"kubernetes.io/projected/ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4-kube-api-access-6b4cl\") pod \"cert-manager-858d87f86b-fl77x\" (UID: \"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4\") " pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.899306 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4-bound-sa-token\") pod \"cert-manager-858d87f86b-fl77x\" (UID: \"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4\") " pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.919477 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b4cl\" (UniqueName: \"kubernetes.io/projected/ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4-kube-api-access-6b4cl\") pod \"cert-manager-858d87f86b-fl77x\" (UID: \"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4\") " pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:43 crc kubenswrapper[5121]: I0126 00:26:43.921842 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4-bound-sa-token\") pod \"cert-manager-858d87f86b-fl77x\" (UID: \"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4\") " pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:44 crc kubenswrapper[5121]: I0126 00:26:44.078871 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-fl77x" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.074330 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.251423 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrmlv\" (UniqueName: \"kubernetes.io/projected/34987d5f-649b-444c-a15e-482e13593729-kube-api-access-rrmlv\") pod \"34987d5f-649b-444c-a15e-482e13593729\" (UID: \"34987d5f-649b-444c-a15e-482e13593729\") " Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.261617 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34987d5f-649b-444c-a15e-482e13593729-kube-api-access-rrmlv" (OuterVolumeSpecName: "kube-api-access-rrmlv") pod "34987d5f-649b-444c-a15e-482e13593729" (UID: "34987d5f-649b-444c-a15e-482e13593729"). InnerVolumeSpecName "kube-api-access-rrmlv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.353738 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrmlv\" (UniqueName: \"kubernetes.io/projected/34987d5f-649b-444c-a15e-482e13593729-kube-api-access-rrmlv\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.511829 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672"] Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.552259 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-fl77x"] Jan 26 00:26:47 crc kubenswrapper[5121]: W0126 00:26:47.564916 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce8c85de_8ddd_4eb6_8dbd_3e42dc4031c4.slice/crio-ea8a79019f42edb0a6f47aeac7ff5725de314ee597bcef1866aa3a9ff436a1a4 WatchSource:0}: Error finding container ea8a79019f42edb0a6f47aeac7ff5725de314ee597bcef1866aa3a9ff436a1a4: Status 404 returned error can't find the container with id ea8a79019f42edb0a6f47aeac7ff5725de314ee597bcef1866aa3a9ff436a1a4 Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.607819 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-cddv6"] Jan 26 00:26:47 crc kubenswrapper[5121]: W0126 00:26:47.622137 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68b12486_8042_4570_bd05_6bb6664c0a2c.slice/crio-8ac676fb74bdf6c149b0ab1b63478145bcb97a2adb5a94310788a4a242b73cf1 WatchSource:0}: Error finding container 8ac676fb74bdf6c149b0ab1b63478145bcb97a2adb5a94310788a4a242b73cf1: Status 404 returned error can't find the container with id 8ac676fb74bdf6c149b0ab1b63478145bcb97a2adb5a94310788a4a242b73cf1 Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.762348 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-fl77x" event={"ID":"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4","Type":"ContainerStarted","Data":"ea8a79019f42edb0a6f47aeac7ff5725de314ee597bcef1866aa3a9ff436a1a4"} Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.769776 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"b7b0a37f-32ac-4e4e-bdd2-4139d54903b6","Type":"ContainerStarted","Data":"73276d9a8a8da19ed950a75fdea730e502029e0b08418c53eba2e657797c33c4"} Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.771454 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" event={"ID":"68b12486-8042-4570-bd05-6bb6664c0a2c","Type":"ContainerStarted","Data":"8ac676fb74bdf6c149b0ab1b63478145bcb97a2adb5a94310788a4a242b73cf1"} Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.774373 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" event={"ID":"03fe3312-4dbf-42da-bf01-4f541b24d3df","Type":"ContainerStarted","Data":"d4585578003bb55fb8d1867d4fc28fc88ee483e7d0785e4fdc7d0add8ff45a81"} Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.776517 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.777412 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.778024 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489786-ghsf9" event={"ID":"34987d5f-649b-444c-a15e-482e13593729","Type":"ContainerDied","Data":"bd6f32331a61505bb6a12ea53f78a8d5c5a371d42a2bad03c6b2d32a1341bd48"} Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.778055 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6f32331a61505bb6a12ea53f78a8d5c5a371d42a2bad03c6b2d32a1341bd48" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.784449 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerStarted","Data":"fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e"} Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.818811 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=31.104446767 podStartE2EDuration="1m14.818785928s" podCreationTimestamp="2026-01-26 00:25:33 +0000 UTC" firstStartedPulling="2026-01-26 00:25:35.731150331 +0000 UTC m=+966.890351456" lastFinishedPulling="2026-01-26 00:26:19.445489492 +0000 UTC m=+1010.604690617" observedRunningTime="2026-01-26 00:26:47.810660506 +0000 UTC m=+1038.969861631" watchObservedRunningTime="2026-01-26 00:26:47.818785928 +0000 UTC m=+1038.977987063" Jan 26 00:26:47 crc kubenswrapper[5121]: I0126 00:26:47.839803 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rfx9h" podStartSLOduration=49.024441484 podStartE2EDuration="50.839777619s" podCreationTimestamp="2026-01-26 00:25:57 +0000 UTC" firstStartedPulling="2026-01-26 00:26:19.634281036 +0000 UTC m=+1010.793482161" lastFinishedPulling="2026-01-26 00:26:21.449617171 +0000 UTC m=+1012.608818296" observedRunningTime="2026-01-26 00:26:47.836005691 +0000 UTC m=+1038.995206816" watchObservedRunningTime="2026-01-26 00:26:47.839777619 +0000 UTC m=+1038.998978744" Jan 26 00:26:48 crc kubenswrapper[5121]: I0126 00:26:48.175030 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-fdl6m"] Jan 26 00:26:48 crc kubenswrapper[5121]: 
I0126 00:26:48.202555 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489780-fdl6m"] Jan 26 00:26:48 crc kubenswrapper[5121]: I0126 00:26:48.265620 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c993c292-35cf-45f2-8be9-beb81e25a150" path="/var/lib/kubelet/pods/c993c292-35cf-45f2-8be9-beb81e25a150/volumes" Jan 26 00:26:48 crc kubenswrapper[5121]: I0126 00:26:48.794379 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"82be670b-4a27-4319-8431-ac1b86d3fc1a","Type":"ContainerStarted","Data":"a37b8c3a37c2d7458b9a63aa132c8ce72cbd5d8bdcc13e0baa27476d27ef9e0c"} Jan 26 00:26:48 crc kubenswrapper[5121]: I0126 00:26:48.797499 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="dad1d4de-99d3-480a-b6fd-bad440e6bf75" containerName="manage-dockerfile" containerID="cri-o://004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c" gracePeriod=30 Jan 26 00:26:48 crc kubenswrapper[5121]: I0126 00:26:48.797776 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"dad1d4de-99d3-480a-b6fd-bad440e6bf75","Type":"ContainerStarted","Data":"004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c"} Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.172741 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.172935 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.475318 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_dad1d4de-99d3-480a-b6fd-bad440e6bf75/manage-dockerfile/0.log" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.475418 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625006 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildworkdir\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625100 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildcachedir\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625164 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-proxy-ca-bundles\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625196 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-root\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625209 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625222 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-blob-cache\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625292 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-node-pullsecrets\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625350 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-system-configs\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625386 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-ca-bundles\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625447 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-run\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625464 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625490 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-push\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625616 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625697 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.625974 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.626045 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.626264 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.626204 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.627896 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdrxr\" (UniqueName: \"kubernetes.io/projected/dad1d4de-99d3-480a-b6fd-bad440e6bf75-kube-api-access-pdrxr\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.628352 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-pull\") pod \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\" (UID: \"dad1d4de-99d3-480a-b6fd-bad440e6bf75\") " Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.627993 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629353 5121 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629406 5121 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629444 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629458 5121 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629470 5121 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dad1d4de-99d3-480a-b6fd-bad440e6bf75-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629489 5121 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629502 5121 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dad1d4de-99d3-480a-b6fd-bad440e6bf75-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629513 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.629525 5121 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dad1d4de-99d3-480a-b6fd-bad440e6bf75-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.632725 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-pull" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-pull") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "builder-dockercfg-n9bc6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.652600 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-push" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-push") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "builder-dockercfg-n9bc6-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.653221 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad1d4de-99d3-480a-b6fd-bad440e6bf75-kube-api-access-pdrxr" (OuterVolumeSpecName: "kube-api-access-pdrxr") pod "dad1d4de-99d3-480a-b6fd-bad440e6bf75" (UID: "dad1d4de-99d3-480a-b6fd-bad440e6bf75"). InnerVolumeSpecName "kube-api-access-pdrxr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.730651 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.730697 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdrxr\" (UniqueName: \"kubernetes.io/projected/dad1d4de-99d3-480a-b6fd-bad440e6bf75-kube-api-access-pdrxr\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.730706 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/dad1d4de-99d3-480a-b6fd-bad440e6bf75-builder-dockercfg-n9bc6-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.813306 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_dad1d4de-99d3-480a-b6fd-bad440e6bf75/manage-dockerfile/0.log" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.813354 5121 generic.go:358] "Generic (PLEG): container finished" podID="dad1d4de-99d3-480a-b6fd-bad440e6bf75" containerID="004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c" exitCode=1 Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.814398 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"dad1d4de-99d3-480a-b6fd-bad440e6bf75","Type":"ContainerDied","Data":"004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c"} Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.814471 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"dad1d4de-99d3-480a-b6fd-bad440e6bf75","Type":"ContainerDied","Data":"2cc42f0ef65990d8bca37a1ea661ff79ec3f1708ec8c40275bad3ac636da2fa0"} Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.814494 5121 scope.go:117] "RemoveContainer" containerID="004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.814788 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.863601 5121 scope.go:117] "RemoveContainer" containerID="004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c" Jan 26 00:26:49 crc kubenswrapper[5121]: E0126 00:26:49.864581 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c\": container with ID starting with 004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c not found: ID does not exist" containerID="004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.864615 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c"} err="failed to get container status \"004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c\": rpc error: code = NotFound desc = could not find container \"004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c\": container with ID starting with 004fddb41ec79f71ef3073f6d899dfc7bfbc10d6f9a4d7aa00a22d04f8b8816c not found: ID does not exist" Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.871079 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:26:49 crc kubenswrapper[5121]: I0126 00:26:49.880573 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:26:50 crc kubenswrapper[5121]: I0126 00:26:50.271657 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad1d4de-99d3-480a-b6fd-bad440e6bf75" path="/var/lib/kubelet/pods/dad1d4de-99d3-480a-b6fd-bad440e6bf75/volumes" Jan 26 00:26:50 crc kubenswrapper[5121]: I0126 00:26:50.487720 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rfx9h" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="registry-server" probeResult="failure" output=< Jan 26 00:26:50 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Jan 26 00:26:50 crc kubenswrapper[5121]: > Jan 26 00:26:58 crc kubenswrapper[5121]: I0126 00:26:58.899776 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="b7b0a37f-32ac-4e4e-bdd2-4139d54903b6" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:26:58 crc kubenswrapper[5121]: {"timestamp": "2026-01-26T00:26:58+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:26:58 crc kubenswrapper[5121]: > Jan 26 00:26:59 crc kubenswrapper[5121]: I0126 00:26:59.316540 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:26:59 crc kubenswrapper[5121]: I0126 00:26:59.375706 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:26:59 crc kubenswrapper[5121]: I0126 00:26:59.561358 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfx9h"] Jan 26 00:27:00 crc kubenswrapper[5121]: I0126 00:27:00.791349 5121 generic.go:358] "Generic (PLEG): container finished" podID="82be670b-4a27-4319-8431-ac1b86d3fc1a" 
containerID="a37b8c3a37c2d7458b9a63aa132c8ce72cbd5d8bdcc13e0baa27476d27ef9e0c" exitCode=0 Jan 26 00:27:00 crc kubenswrapper[5121]: I0126 00:27:00.792885 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"82be670b-4a27-4319-8431-ac1b86d3fc1a","Type":"ContainerDied","Data":"a37b8c3a37c2d7458b9a63aa132c8ce72cbd5d8bdcc13e0baa27476d27ef9e0c"} Jan 26 00:27:01 crc kubenswrapper[5121]: I0126 00:27:01.808831 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rfx9h" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="registry-server" containerID="cri-o://fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e" gracePeriod=2 Jan 26 00:27:02 crc kubenswrapper[5121]: I0126 00:27:02.819967 5121 generic.go:358] "Generic (PLEG): container finished" podID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerID="fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e" exitCode=0 Jan 26 00:27:02 crc kubenswrapper[5121]: I0126 00:27:02.820072 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerDied","Data":"fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e"} Jan 26 00:27:03 crc kubenswrapper[5121]: I0126 00:27:03.880027 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="b7b0a37f-32ac-4e4e-bdd2-4139d54903b6" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:27:03 crc kubenswrapper[5121]: {"timestamp": "2026-01-26T00:27:03+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:27:03 crc kubenswrapper[5121]: > Jan 26 00:27:08 crc kubenswrapper[5121]: I0126 00:27:08.959258 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="b7b0a37f-32ac-4e4e-bdd2-4139d54903b6" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:27:08 crc kubenswrapper[5121]: {"timestamp": "2026-01-26T00:27:08+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:27:08 crc kubenswrapper[5121]: > Jan 26 00:27:09 crc kubenswrapper[5121]: E0126 00:27:09.319730 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e is running failed: container process not found" containerID="fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:27:09 crc kubenswrapper[5121]: E0126 00:27:09.321131 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e is running failed: container process not found" containerID="fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:27:09 crc kubenswrapper[5121]: E0126 00:27:09.321688 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e is running failed: container process not found" 
containerID="fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:27:09 crc kubenswrapper[5121]: E0126 00:27:09.321790 5121 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-rfx9h" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="registry-server" probeResult="unknown" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.395986 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.706324 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhl5\" (UniqueName: \"kubernetes.io/projected/1cdc1b28-c53b-411c-8056-5bce55d60e1d-kube-api-access-lmhl5\") pod \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.706831 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-utilities\") pod \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.706909 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-catalog-content\") pod \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\" (UID: \"1cdc1b28-c53b-411c-8056-5bce55d60e1d\") " Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.709055 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-utilities" (OuterVolumeSpecName: "utilities") pod "1cdc1b28-c53b-411c-8056-5bce55d60e1d" (UID: "1cdc1b28-c53b-411c-8056-5bce55d60e1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.738164 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cdc1b28-c53b-411c-8056-5bce55d60e1d-kube-api-access-lmhl5" (OuterVolumeSpecName: "kube-api-access-lmhl5") pod "1cdc1b28-c53b-411c-8056-5bce55d60e1d" (UID: "1cdc1b28-c53b-411c-8056-5bce55d60e1d"). InnerVolumeSpecName "kube-api-access-lmhl5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.785013 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cdc1b28-c53b-411c-8056-5bce55d60e1d" (UID: "1cdc1b28-c53b-411c-8056-5bce55d60e1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.808976 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.809029 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdc1b28-c53b-411c-8056-5bce55d60e1d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.809041 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmhl5\" (UniqueName: \"kubernetes.io/projected/1cdc1b28-c53b-411c-8056-5bce55d60e1d-kube-api-access-lmhl5\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.883698 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfx9h" event={"ID":"1cdc1b28-c53b-411c-8056-5bce55d60e1d","Type":"ContainerDied","Data":"6a666ce454c92cf6d39fc98d8eef8b9fda2d7a18563553b93b7dc71aa9b6289e"} Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.883775 5121 scope.go:117] "RemoveContainer" containerID="fa4c5e1aba815013be90d47898151dca8aacf2c041ed7631b39c3d7bab70fd2e" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.883944 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfx9h" Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.924354 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfx9h"] Jan 26 00:27:09 crc kubenswrapper[5121]: I0126 00:27:09.932153 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rfx9h"] Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.264254 5121 scope.go:117] "RemoveContainer" containerID="9b599236031bf205b02a133df96dbc8f1fda96fd1b4daf48f362c40dfc395093" Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.266497 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" path="/var/lib/kubelet/pods/1cdc1b28-c53b-411c-8056-5bce55d60e1d/volumes" Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.301965 5121 scope.go:117] "RemoveContainer" containerID="4a1bd2c342b23ce77bf4a7ae4893c468553cf9447a92585f8e545a45a0e26599" Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.907579 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-fl77x" event={"ID":"ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4","Type":"ContainerStarted","Data":"07b1dec8df8daf99a7a9448b221f3b13c1e6c571b1f733ca9123ab057472940c"} Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.909502 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" event={"ID":"68b12486-8042-4570-bd05-6bb6664c0a2c","Type":"ContainerStarted","Data":"7e466b423bdff2189807c369d0347b5a47befa7ee6830b1f612db7dccb362827"} Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.909607 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.912681 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" 
event={"ID":"03fe3312-4dbf-42da-bf01-4f541b24d3df","Type":"ContainerStarted","Data":"a0d01cac5946dc422010e3caa709cca50caf65b7743078bfeb50111ee7317e52"} Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.915382 5121 generic.go:358] "Generic (PLEG): container finished" podID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerID="c269a4534274e7f9c9a86b24fd4359265d9327d75915e6d33fcb29f8e27b1006" exitCode=0 Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.915465 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"82be670b-4a27-4319-8431-ac1b86d3fc1a","Type":"ContainerDied","Data":"c269a4534274e7f9c9a86b24fd4359265d9327d75915e6d33fcb29f8e27b1006"} Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.946657 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-fl77x" podStartSLOduration=7.059237931 podStartE2EDuration="29.946632648s" podCreationTimestamp="2026-01-26 00:26:41 +0000 UTC" firstStartedPulling="2026-01-26 00:26:47.587889728 +0000 UTC m=+1038.747090853" lastFinishedPulling="2026-01-26 00:27:10.475284445 +0000 UTC m=+1061.634485570" observedRunningTime="2026-01-26 00:27:10.942307144 +0000 UTC m=+1062.101508269" watchObservedRunningTime="2026-01-26 00:27:10.946632648 +0000 UTC m=+1062.105833773" Jan 26 00:27:10 crc kubenswrapper[5121]: I0126 00:27:10.973888 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" podStartSLOduration=26.337120516 podStartE2EDuration="48.973865087s" podCreationTimestamp="2026-01-26 00:26:22 +0000 UTC" firstStartedPulling="2026-01-26 00:26:47.74723072 +0000 UTC m=+1038.906431845" lastFinishedPulling="2026-01-26 00:27:10.383975291 +0000 UTC m=+1061.543176416" observedRunningTime="2026-01-26 00:27:10.972082336 +0000 UTC m=+1062.131283461" watchObservedRunningTime="2026-01-26 00:27:10.973865087 +0000 UTC m=+1062.133066232" Jan 26 00:27:11 crc kubenswrapper[5121]: I0126 00:27:11.001912 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-f7672" podStartSLOduration=25.213052164 podStartE2EDuration="48.001880499s" podCreationTimestamp="2026-01-26 00:26:23 +0000 UTC" firstStartedPulling="2026-01-26 00:26:47.587483457 +0000 UTC m=+1038.746684572" lastFinishedPulling="2026-01-26 00:27:10.376311782 +0000 UTC m=+1061.535512907" observedRunningTime="2026-01-26 00:27:10.990625887 +0000 UTC m=+1062.149827012" watchObservedRunningTime="2026-01-26 00:27:11.001880499 +0000 UTC m=+1062.161081634" Jan 26 00:27:11 crc kubenswrapper[5121]: I0126 00:27:11.101209 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/manage-dockerfile/0.log" Jan 26 00:27:13 crc kubenswrapper[5121]: I0126 00:27:13.875886 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="b7b0a37f-32ac-4e4e-bdd2-4139d54903b6" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:27:13 crc kubenswrapper[5121]: {"timestamp": "2026-01-26T00:27:13+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:27:13 crc kubenswrapper[5121]: > Jan 26 00:27:13 crc kubenswrapper[5121]: I0126 00:27:13.944235 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"82be670b-4a27-4319-8431-ac1b86d3fc1a","Type":"ContainerStarted","Data":"fad7d787296fb0d677938a475081c9b187cfe2dd1566ca951e058ad3f8851977"} Jan 26 00:27:14 crc kubenswrapper[5121]: I0126 00:27:14.983364 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=35.592196344 podStartE2EDuration="1m3.983341972s" podCreationTimestamp="2026-01-26 00:26:11 +0000 UTC" firstStartedPulling="2026-01-26 00:26:19.378436832 +0000 UTC m=+1010.537637957" lastFinishedPulling="2026-01-26 00:26:47.76958246 +0000 UTC m=+1038.928783585" observedRunningTime="2026-01-26 00:27:14.978156523 +0000 UTC m=+1066.137357648" watchObservedRunningTime="2026-01-26 00:27:14.983341972 +0000 UTC m=+1066.142543097" Jan 26 00:27:16 crc kubenswrapper[5121]: I0126 00:27:16.926886 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-cddv6" Jan 26 00:27:18 crc kubenswrapper[5121]: I0126 00:27:18.864248 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="b7b0a37f-32ac-4e4e-bdd2-4139d54903b6" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:27:18 crc kubenswrapper[5121]: {"timestamp": "2026-01-26T00:27:18+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:27:18 crc kubenswrapper[5121]: > Jan 26 00:27:24 crc kubenswrapper[5121]: I0126 00:27:24.077816 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:27:31 crc kubenswrapper[5121]: I0126 00:27:31.802117 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:31 crc kubenswrapper[5121]: I0126 00:27:31.802645 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:32 crc kubenswrapper[5121]: I0126 00:27:32.972412 5121 scope.go:117] "RemoveContainer" containerID="46e36e94ccf4bd052f15b1eb1bc83912554716c0cceaf42d3a3db1c1f758e192" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.144376 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489788-9f7zk"] Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153018 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="registry-server" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153049 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="registry-server" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153072 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dad1d4de-99d3-480a-b6fd-bad440e6bf75" containerName="manage-dockerfile" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153080 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad1d4de-99d3-480a-b6fd-bad440e6bf75" 
containerName="manage-dockerfile" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153092 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="extract-utilities" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153098 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="extract-utilities" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153115 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="extract-content" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153120 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="extract-content" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153132 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34987d5f-649b-444c-a15e-482e13593729" containerName="oc" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153137 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="34987d5f-649b-444c-a15e-482e13593729" containerName="oc" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153301 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="34987d5f-649b-444c-a15e-482e13593729" containerName="oc" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153317 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="1cdc1b28-c53b-411c-8056-5bce55d60e1d" containerName="registry-server" Jan 26 00:28:00 crc kubenswrapper[5121]: I0126 00:28:00.153329 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="dad1d4de-99d3-480a-b6fd-bad440e6bf75" containerName="manage-dockerfile" Jan 26 00:28:01 crc kubenswrapper[5121]: I0126 00:28:01.802382 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:28:01 crc kubenswrapper[5121]: I0126 00:28:01.803098 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.433843 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.437839 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.438028 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.439367 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.443587 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-9f7zk"] Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.477259 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r82m\" (UniqueName: \"kubernetes.io/projected/78f93e1d-b2f8-46ec-9016-f2dbfa6012d0-kube-api-access-6r82m\") pod \"auto-csr-approver-29489788-9f7zk\" (UID: \"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0\") " pod="openshift-infra/auto-csr-approver-29489788-9f7zk" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.579303 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6r82m\" (UniqueName: \"kubernetes.io/projected/78f93e1d-b2f8-46ec-9016-f2dbfa6012d0-kube-api-access-6r82m\") pod \"auto-csr-approver-29489788-9f7zk\" (UID: \"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0\") " pod="openshift-infra/auto-csr-approver-29489788-9f7zk" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.612927 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r82m\" (UniqueName: \"kubernetes.io/projected/78f93e1d-b2f8-46ec-9016-f2dbfa6012d0-kube-api-access-6r82m\") pod \"auto-csr-approver-29489788-9f7zk\" (UID: \"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0\") " pod="openshift-infra/auto-csr-approver-29489788-9f7zk" Jan 26 00:28:04 crc kubenswrapper[5121]: I0126 00:28:04.755946 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" Jan 26 00:28:05 crc kubenswrapper[5121]: I0126 00:28:05.194132 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-9f7zk"] Jan 26 00:28:05 crc kubenswrapper[5121]: I0126 00:28:05.467741 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" event={"ID":"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0","Type":"ContainerStarted","Data":"6a442b835a2a29b2670e1d33f9211f740ea2eb98656a5895c25b608f2720828e"} Jan 26 00:28:07 crc kubenswrapper[5121]: I0126 00:28:07.510154 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" event={"ID":"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0","Type":"ContainerStarted","Data":"8655a4ab7161a3d596c431f478eaf375bc3cc7c6fee7efb90cd1e3cbb64e8aa8"} Jan 26 00:28:07 crc kubenswrapper[5121]: I0126 00:28:07.536409 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" podStartSLOduration=5.86966007 podStartE2EDuration="7.536383313s" podCreationTimestamp="2026-01-26 00:28:00 +0000 UTC" firstStartedPulling="2026-01-26 00:28:05.202635285 +0000 UTC m=+1116.361836410" lastFinishedPulling="2026-01-26 00:28:06.869358518 +0000 UTC m=+1118.028559653" observedRunningTime="2026-01-26 00:28:07.533061498 +0000 UTC m=+1118.692262623" watchObservedRunningTime="2026-01-26 00:28:07.536383313 +0000 UTC m=+1118.695584438" Jan 26 00:28:08 crc kubenswrapper[5121]: I0126 00:28:08.519194 5121 generic.go:358] "Generic (PLEG): container finished" podID="78f93e1d-b2f8-46ec-9016-f2dbfa6012d0" containerID="8655a4ab7161a3d596c431f478eaf375bc3cc7c6fee7efb90cd1e3cbb64e8aa8" exitCode=0 Jan 26 00:28:08 crc kubenswrapper[5121]: I0126 00:28:08.519403 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" event={"ID":"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0","Type":"ContainerDied","Data":"8655a4ab7161a3d596c431f478eaf375bc3cc7c6fee7efb90cd1e3cbb64e8aa8"} Jan 26 00:28:09 crc kubenswrapper[5121]: I0126 00:28:09.807863 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" Jan 26 00:28:09 crc kubenswrapper[5121]: I0126 00:28:09.906933 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r82m\" (UniqueName: \"kubernetes.io/projected/78f93e1d-b2f8-46ec-9016-f2dbfa6012d0-kube-api-access-6r82m\") pod \"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0\" (UID: \"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0\") " Jan 26 00:28:09 crc kubenswrapper[5121]: I0126 00:28:09.914516 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f93e1d-b2f8-46ec-9016-f2dbfa6012d0-kube-api-access-6r82m" (OuterVolumeSpecName: "kube-api-access-6r82m") pod "78f93e1d-b2f8-46ec-9016-f2dbfa6012d0" (UID: "78f93e1d-b2f8-46ec-9016-f2dbfa6012d0"). InnerVolumeSpecName "kube-api-access-6r82m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:28:10 crc kubenswrapper[5121]: I0126 00:28:10.009206 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6r82m\" (UniqueName: \"kubernetes.io/projected/78f93e1d-b2f8-46ec-9016-f2dbfa6012d0-kube-api-access-6r82m\") on node \"crc\" DevicePath \"\"" Jan 26 00:28:10 crc kubenswrapper[5121]: I0126 00:28:10.539129 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" Jan 26 00:28:10 crc kubenswrapper[5121]: I0126 00:28:10.539129 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489788-9f7zk" event={"ID":"78f93e1d-b2f8-46ec-9016-f2dbfa6012d0","Type":"ContainerDied","Data":"6a442b835a2a29b2670e1d33f9211f740ea2eb98656a5895c25b608f2720828e"} Jan 26 00:28:10 crc kubenswrapper[5121]: I0126 00:28:10.540258 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a442b835a2a29b2670e1d33f9211f740ea2eb98656a5895c25b608f2720828e" Jan 26 00:28:10 crc kubenswrapper[5121]: I0126 00:28:10.601623 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-74q2r"] Jan 26 00:28:10 crc kubenswrapper[5121]: I0126 00:28:10.610677 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489782-74q2r"] Jan 26 00:28:12 crc kubenswrapper[5121]: I0126 00:28:12.271358 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a3bd5ec-bc60-4df9-af1d-f70c63c5681d" path="/var/lib/kubelet/pods/5a3bd5ec-bc60-4df9-af1d-f70c63c5681d/volumes" Jan 26 00:28:31 crc kubenswrapper[5121]: I0126 00:28:31.802943 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:28:31 crc kubenswrapper[5121]: I0126 00:28:31.803998 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:28:31 crc kubenswrapper[5121]: I0126 00:28:31.804067 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:28:31 crc kubenswrapper[5121]: I0126 00:28:31.805079 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d40065c8f3cb43a8730adbc34bd9fe8db62d85dab732f2dbdec9e5ddf9d6e21f"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:28:31 crc kubenswrapper[5121]: I0126 00:28:31.805398 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" containerID="cri-o://d40065c8f3cb43a8730adbc34bd9fe8db62d85dab732f2dbdec9e5ddf9d6e21f" gracePeriod=600 Jan 26 00:28:33 crc kubenswrapper[5121]: I0126 00:28:33.126023 5121 scope.go:117] "RemoveContainer" containerID="b337398270acf844d03020cb36ac67644d9d315b83dab3c543afbd4d159f1560" Jan 26 00:28:33 crc kubenswrapper[5121]: I0126 00:28:33.729092 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="d40065c8f3cb43a8730adbc34bd9fe8db62d85dab732f2dbdec9e5ddf9d6e21f" exitCode=0 Jan 26 00:28:33 crc kubenswrapper[5121]: I0126 00:28:33.729199 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"d40065c8f3cb43a8730adbc34bd9fe8db62d85dab732f2dbdec9e5ddf9d6e21f"} Jan 26 00:28:33 crc kubenswrapper[5121]: I0126 00:28:33.729290 5121 scope.go:117] "RemoveContainer" containerID="8dff4e88b41be67d172c0dc3962b2a57b2fe7254550f8a45781d21ad403679a1" Jan 26 00:28:35 crc kubenswrapper[5121]: I0126 00:28:35.747835 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"b833963d85ba51f54d5d46d8a4bcffc5186b5cf7198ce48a03fd6f13859dcd53"} Jan 26 00:29:07 crc kubenswrapper[5121]: I0126 00:29:07.991696 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:29:07 crc kubenswrapper[5121]: I0126 00:29:07.993109 5121 generic.go:358] "Generic (PLEG): container finished" podID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerID="fad7d787296fb0d677938a475081c9b187cfe2dd1566ca951e058ad3f8851977" exitCode=1 Jan 26 00:29:07 crc kubenswrapper[5121]: I0126 00:29:07.993216 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"82be670b-4a27-4319-8431-ac1b86d3fc1a","Type":"ContainerDied","Data":"fad7d787296fb0d677938a475081c9b187cfe2dd1566ca951e058ad3f8851977"} Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.232375 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.233650 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255567 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-proxy-ca-bundles\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255615 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-root\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255638 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-run\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255694 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-ca-bundles\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255717 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-node-pullsecrets\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255748 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-push\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255781 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildcachedir\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255834 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-pull\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255852 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-system-configs\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255874 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-88htm\" (UniqueName: \"kubernetes.io/projected/82be670b-4a27-4319-8431-ac1b86d3fc1a-kube-api-access-88htm\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255927 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildworkdir\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.255979 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-blob-cache\") pod \"82be670b-4a27-4319-8431-ac1b86d3fc1a\" (UID: \"82be670b-4a27-4319-8431-ac1b86d3fc1a\") " Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.257430 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.257504 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.257371 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.258024 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.260714 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.264703 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82be670b-4a27-4319-8431-ac1b86d3fc1a-kube-api-access-88htm" (OuterVolumeSpecName: "kube-api-access-88htm") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). 
InnerVolumeSpecName "kube-api-access-88htm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.267141 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-push" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-push") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "builder-dockercfg-n9bc6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.268639 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.278175 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-pull" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-pull") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "builder-dockercfg-n9bc6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.302091 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357354 5121 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357395 5121 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357408 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357416 5121 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357426 5121 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357434 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357443 5121 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/82be670b-4a27-4319-8431-ac1b86d3fc1a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357451 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/82be670b-4a27-4319-8431-ac1b86d3fc1a-builder-dockercfg-n9bc6-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357459 5121 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.357466 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-88htm\" (UniqueName: \"kubernetes.io/projected/82be670b-4a27-4319-8431-ac1b86d3fc1a-kube-api-access-88htm\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.448584 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:29:09 crc kubenswrapper[5121]: I0126 00:29:09.458878 5121 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:10 crc kubenswrapper[5121]: I0126 00:29:10.007827 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:29:10 crc kubenswrapper[5121]: I0126 00:29:10.008840 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"82be670b-4a27-4319-8431-ac1b86d3fc1a","Type":"ContainerDied","Data":"557b4807447a79dc4010417b8aa58553a7a3a2117a54b1ab2c7bacf9ec604cf8"} Jan 26 00:29:10 crc kubenswrapper[5121]: I0126 00:29:10.008904 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="557b4807447a79dc4010417b8aa58553a7a3a2117a54b1ab2c7bacf9ec604cf8" Jan 26 00:29:10 crc kubenswrapper[5121]: I0126 00:29:10.008860 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:29:11 crc kubenswrapper[5121]: I0126 00:29:11.103032 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "82be670b-4a27-4319-8431-ac1b86d3fc1a" (UID: "82be670b-4a27-4319-8431-ac1b86d3fc1a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:29:11 crc kubenswrapper[5121]: I0126 00:29:11.183560 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/82be670b-4a27-4319-8431-ac1b86d3fc1a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.546604 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.549970 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78f93e1d-b2f8-46ec-9016-f2dbfa6012d0" containerName="oc" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550024 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f93e1d-b2f8-46ec-9016-f2dbfa6012d0" containerName="oc" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550112 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerName="manage-dockerfile" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550119 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerName="manage-dockerfile" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550152 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerName="git-clone" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550165 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerName="git-clone" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550192 5121 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerName="docker-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550199 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerName="docker-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550557 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="78f93e1d-b2f8-46ec-9016-f2dbfa6012d0" containerName="oc" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.550573 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="82be670b-4a27-4319-8431-ac1b86d3fc1a" containerName="docker-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.555651 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.560561 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.560608 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-n9bc6\"" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.560636 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.560636 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.565583 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.712820 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.712886 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.712940 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713009 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: 
\"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713030 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713178 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713279 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdjp\" (UniqueName: \"kubernetes.io/projected/15fe1e64-e056-4d07-97a6-d19ad38afe03-kube-api-access-jmdjp\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713306 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713349 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713375 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713408 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.713432 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: 
\"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815436 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815510 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815549 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815574 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815614 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815657 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815694 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815790 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815817 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815865 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815865 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.815911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jmdjp\" (UniqueName: \"kubernetes.io/projected/15fe1e64-e056-4d07-97a6-d19ad38afe03-kube-api-access-jmdjp\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.816281 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.816290 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.816615 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.816637 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: 
\"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.817389 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.817442 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.817526 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.817566 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.824112 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.824286 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.843614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmdjp\" (UniqueName: \"kubernetes.io/projected/15fe1e64-e056-4d07-97a6-d19ad38afe03-kube-api-access-jmdjp\") pod \"service-telemetry-operator-3-build\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:19 crc kubenswrapper[5121]: I0126 00:29:19.882549 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:29:20 crc kubenswrapper[5121]: I0126 00:29:20.093187 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:29:21 crc kubenswrapper[5121]: I0126 00:29:21.103666 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"15fe1e64-e056-4d07-97a6-d19ad38afe03","Type":"ContainerStarted","Data":"2bb15fa5904c139b7b68a601ae56b21a32fa2e6e8c590aa19fc53473d0489814"} Jan 26 00:29:21 crc kubenswrapper[5121]: I0126 00:29:21.104024 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"15fe1e64-e056-4d07-97a6-d19ad38afe03","Type":"ContainerStarted","Data":"7ba459dbbaa88dbd81117f8f88888da594b87506d68eb252ca75ecfd0715cf47"} Jan 26 00:29:29 crc kubenswrapper[5121]: I0126 00:29:29.170371 5121 generic.go:358] "Generic (PLEG): container finished" podID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerID="2bb15fa5904c139b7b68a601ae56b21a32fa2e6e8c590aa19fc53473d0489814" exitCode=0 Jan 26 00:29:29 crc kubenswrapper[5121]: I0126 00:29:29.170493 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"15fe1e64-e056-4d07-97a6-d19ad38afe03","Type":"ContainerDied","Data":"2bb15fa5904c139b7b68a601ae56b21a32fa2e6e8c590aa19fc53473d0489814"} Jan 26 00:29:30 crc kubenswrapper[5121]: I0126 00:29:30.178686 5121 generic.go:358] "Generic (PLEG): container finished" podID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerID="e51742e6afe1f7c8d8ba06a45b589277bc4753c82ebf74bd5dbe07ed6858c262" exitCode=0 Jan 26 00:29:30 crc kubenswrapper[5121]: I0126 00:29:30.178782 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"15fe1e64-e056-4d07-97a6-d19ad38afe03","Type":"ContainerDied","Data":"e51742e6afe1f7c8d8ba06a45b589277bc4753c82ebf74bd5dbe07ed6858c262"} Jan 26 00:29:30 crc kubenswrapper[5121]: I0126 00:29:30.213136 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/manage-dockerfile/0.log" Jan 26 00:29:31 crc kubenswrapper[5121]: I0126 00:29:31.192378 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"15fe1e64-e056-4d07-97a6-d19ad38afe03","Type":"ContainerStarted","Data":"8bdbfd6e8f4050a80940614c3abd1daf637786b5827237cb09b9b202db49f1eb"} Jan 26 00:29:31 crc kubenswrapper[5121]: I0126 00:29:31.231316 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-3-build" podStartSLOduration=12.231293341 podStartE2EDuration="12.231293341s" podCreationTimestamp="2026-01-26 00:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:29:31.222512551 +0000 UTC m=+1202.381713676" watchObservedRunningTime="2026-01-26 00:29:31.231293341 +0000 UTC m=+1202.390494466" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.147371 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.153245 5121 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.205316 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.206265 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.212881 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.213103 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.215783 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.216059 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.218493 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:29:33 crc kubenswrapper[5121]: I0126 00:29:33.218785 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.151329 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd"] Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.164443 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489790-sgqt2"] Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.165165 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.169512 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.170018 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.172327 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.178922 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.180323 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.180485 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.182139 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd"] Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.194739 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-sgqt2"] Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.359397 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwnp8\" (UniqueName: \"kubernetes.io/projected/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-kube-api-access-mwnp8\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.359493 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-config-volume\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.359635 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46q4\" (UniqueName: \"kubernetes.io/projected/53e9d6b8-6409-4cdd-8149-ac57bb7a0db5-kube-api-access-s46q4\") pod \"auto-csr-approver-29489790-sgqt2\" (UID: \"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5\") " pod="openshift-infra/auto-csr-approver-29489790-sgqt2" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.359890 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-secret-volume\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.462316 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-secret-volume\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.462472 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwnp8\" (UniqueName: \"kubernetes.io/projected/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-kube-api-access-mwnp8\") pod \"collect-profiles-29489790-nw2pd\" (UID: 
\"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.462519 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-config-volume\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.462542 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s46q4\" (UniqueName: \"kubernetes.io/projected/53e9d6b8-6409-4cdd-8149-ac57bb7a0db5-kube-api-access-s46q4\") pod \"auto-csr-approver-29489790-sgqt2\" (UID: \"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5\") " pod="openshift-infra/auto-csr-approver-29489790-sgqt2" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.464411 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-config-volume\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.472973 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-secret-volume\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.487935 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwnp8\" (UniqueName: \"kubernetes.io/projected/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-kube-api-access-mwnp8\") pod \"collect-profiles-29489790-nw2pd\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.489727 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46q4\" (UniqueName: \"kubernetes.io/projected/53e9d6b8-6409-4cdd-8149-ac57bb7a0db5-kube-api-access-s46q4\") pod \"auto-csr-approver-29489790-sgqt2\" (UID: \"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5\") " pod="openshift-infra/auto-csr-approver-29489790-sgqt2" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.506559 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.523607 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.814148 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-sgqt2"] Jan 26 00:30:00 crc kubenswrapper[5121]: W0126 00:30:00.819038 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53e9d6b8_6409_4cdd_8149_ac57bb7a0db5.slice/crio-02e248ea7e347dab722729e821547547b52e9a64c55d2b28d5e67302e7569e7d WatchSource:0}: Error finding container 02e248ea7e347dab722729e821547547b52e9a64c55d2b28d5e67302e7569e7d: Status 404 returned error can't find the container with id 02e248ea7e347dab722729e821547547b52e9a64c55d2b28d5e67302e7569e7d Jan 26 00:30:00 crc kubenswrapper[5121]: I0126 00:30:00.875879 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd"] Jan 26 00:30:00 crc kubenswrapper[5121]: W0126 00:30:00.887085 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d6aea8a_4cc0_42c1_a1a7_fc011ba72294.slice/crio-5804e65d499753757b76b4cbfb87c891fb02a05e33f5f64591f86c138f905e04 WatchSource:0}: Error finding container 5804e65d499753757b76b4cbfb87c891fb02a05e33f5f64591f86c138f905e04: Status 404 returned error can't find the container with id 5804e65d499753757b76b4cbfb87c891fb02a05e33f5f64591f86c138f905e04 Jan 26 00:30:01 crc kubenswrapper[5121]: I0126 00:30:01.429387 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" event={"ID":"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5","Type":"ContainerStarted","Data":"02e248ea7e347dab722729e821547547b52e9a64c55d2b28d5e67302e7569e7d"} Jan 26 00:30:01 crc kubenswrapper[5121]: I0126 00:30:01.432952 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" event={"ID":"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294","Type":"ContainerStarted","Data":"4dc115064902dcf10bbfb784e2d13104297fedd7388a0a58c2d857f21206a978"} Jan 26 00:30:01 crc kubenswrapper[5121]: I0126 00:30:01.433116 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" event={"ID":"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294","Type":"ContainerStarted","Data":"5804e65d499753757b76b4cbfb87c891fb02a05e33f5f64591f86c138f905e04"} Jan 26 00:30:01 crc kubenswrapper[5121]: I0126 00:30:01.459908 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" podStartSLOduration=1.459884698 podStartE2EDuration="1.459884698s" podCreationTimestamp="2026-01-26 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:30:01.459553469 +0000 UTC m=+1232.618754604" watchObservedRunningTime="2026-01-26 00:30:01.459884698 +0000 UTC m=+1232.619085823" Jan 26 00:30:02 crc kubenswrapper[5121]: I0126 00:30:02.441250 5121 generic.go:358] "Generic (PLEG): container finished" podID="9d6aea8a-4cc0-42c1-a1a7-fc011ba72294" containerID="4dc115064902dcf10bbfb784e2d13104297fedd7388a0a58c2d857f21206a978" exitCode=0 Jan 26 00:30:02 crc kubenswrapper[5121]: I0126 00:30:02.441360 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" event={"ID":"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294","Type":"ContainerDied","Data":"4dc115064902dcf10bbfb784e2d13104297fedd7388a0a58c2d857f21206a978"} Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.730329 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.824501 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-secret-volume\") pod \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.824797 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-config-volume\") pod \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.824839 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwnp8\" (UniqueName: \"kubernetes.io/projected/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-kube-api-access-mwnp8\") pod \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\" (UID: \"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294\") " Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.825787 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d6aea8a-4cc0-42c1-a1a7-fc011ba72294" (UID: "9d6aea8a-4cc0-42c1-a1a7-fc011ba72294"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.835028 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-kube-api-access-mwnp8" (OuterVolumeSpecName: "kube-api-access-mwnp8") pod "9d6aea8a-4cc0-42c1-a1a7-fc011ba72294" (UID: "9d6aea8a-4cc0-42c1-a1a7-fc011ba72294"). InnerVolumeSpecName "kube-api-access-mwnp8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.835133 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9d6aea8a-4cc0-42c1-a1a7-fc011ba72294" (UID: "9d6aea8a-4cc0-42c1-a1a7-fc011ba72294"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.926685 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.926720 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwnp8\" (UniqueName: \"kubernetes.io/projected/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-kube-api-access-mwnp8\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:03 crc kubenswrapper[5121]: I0126 00:30:03.926731 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d6aea8a-4cc0-42c1-a1a7-fc011ba72294-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:04 crc kubenswrapper[5121]: I0126 00:30:04.460387 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" event={"ID":"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5","Type":"ContainerStarted","Data":"1f9e11b651d1343721b5cc9e424e9c48adae4fc5c890ec0d8fff4bd679f0edbf"} Jan 26 00:30:04 crc kubenswrapper[5121]: I0126 00:30:04.462804 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" event={"ID":"9d6aea8a-4cc0-42c1-a1a7-fc011ba72294","Type":"ContainerDied","Data":"5804e65d499753757b76b4cbfb87c891fb02a05e33f5f64591f86c138f905e04"} Jan 26 00:30:04 crc kubenswrapper[5121]: I0126 00:30:04.462963 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5804e65d499753757b76b4cbfb87c891fb02a05e33f5f64591f86c138f905e04" Jan 26 00:30:04 crc kubenswrapper[5121]: I0126 00:30:04.462825 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-nw2pd" Jan 26 00:30:04 crc kubenswrapper[5121]: I0126 00:30:04.486209 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" podStartSLOduration=1.566748849 podStartE2EDuration="4.486180437s" podCreationTimestamp="2026-01-26 00:30:00 +0000 UTC" firstStartedPulling="2026-01-26 00:30:00.821453488 +0000 UTC m=+1231.980654613" lastFinishedPulling="2026-01-26 00:30:03.740885076 +0000 UTC m=+1234.900086201" observedRunningTime="2026-01-26 00:30:04.482011269 +0000 UTC m=+1235.641212404" watchObservedRunningTime="2026-01-26 00:30:04.486180437 +0000 UTC m=+1235.645381562" Jan 26 00:30:05 crc kubenswrapper[5121]: I0126 00:30:05.478393 5121 generic.go:358] "Generic (PLEG): container finished" podID="53e9d6b8-6409-4cdd-8149-ac57bb7a0db5" containerID="1f9e11b651d1343721b5cc9e424e9c48adae4fc5c890ec0d8fff4bd679f0edbf" exitCode=0 Jan 26 00:30:05 crc kubenswrapper[5121]: I0126 00:30:05.480198 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" event={"ID":"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5","Type":"ContainerDied","Data":"1f9e11b651d1343721b5cc9e424e9c48adae4fc5c890ec0d8fff4bd679f0edbf"} Jan 26 00:30:06 crc kubenswrapper[5121]: I0126 00:30:06.747486 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" Jan 26 00:30:06 crc kubenswrapper[5121]: I0126 00:30:06.772625 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46q4\" (UniqueName: \"kubernetes.io/projected/53e9d6b8-6409-4cdd-8149-ac57bb7a0db5-kube-api-access-s46q4\") pod \"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5\" (UID: \"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5\") " Jan 26 00:30:06 crc kubenswrapper[5121]: I0126 00:30:06.781260 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e9d6b8-6409-4cdd-8149-ac57bb7a0db5-kube-api-access-s46q4" (OuterVolumeSpecName: "kube-api-access-s46q4") pod "53e9d6b8-6409-4cdd-8149-ac57bb7a0db5" (UID: "53e9d6b8-6409-4cdd-8149-ac57bb7a0db5"). InnerVolumeSpecName "kube-api-access-s46q4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:06 crc kubenswrapper[5121]: I0126 00:30:06.874162 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s46q4\" (UniqueName: \"kubernetes.io/projected/53e9d6b8-6409-4cdd-8149-ac57bb7a0db5-kube-api-access-s46q4\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:07 crc kubenswrapper[5121]: I0126 00:30:07.511707 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" Jan 26 00:30:07 crc kubenswrapper[5121]: I0126 00:30:07.511696 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489790-sgqt2" event={"ID":"53e9d6b8-6409-4cdd-8149-ac57bb7a0db5","Type":"ContainerDied","Data":"02e248ea7e347dab722729e821547547b52e9a64c55d2b28d5e67302e7569e7d"} Jan 26 00:30:07 crc kubenswrapper[5121]: I0126 00:30:07.512637 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e248ea7e347dab722729e821547547b52e9a64c55d2b28d5e67302e7569e7d" Jan 26 00:30:07 crc kubenswrapper[5121]: I0126 00:30:07.563137 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-dgk4j"] Jan 26 00:30:07 crc kubenswrapper[5121]: I0126 00:30:07.567935 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489784-dgk4j"] Jan 26 00:30:08 crc kubenswrapper[5121]: I0126 00:30:08.265611 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88412b4-4371-426f-85fb-313e41c1e075" path="/var/lib/kubelet/pods/d88412b4-4371-426f-85fb-313e41c1e075/volumes" Jan 26 00:30:33 crc kubenswrapper[5121]: I0126 00:30:33.365890 5121 scope.go:117] "RemoveContainer" containerID="ba6c23d2c03ddb6b18f94c59cecf85f17e1ee884b123e109fd422111d1f0f35e" Jan 26 00:30:44 crc kubenswrapper[5121]: I0126 00:30:44.036961 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/docker-build/0.log" Jan 26 00:30:44 crc kubenswrapper[5121]: I0126 00:30:44.040106 5121 generic.go:358] "Generic (PLEG): container finished" podID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerID="8bdbfd6e8f4050a80940614c3abd1daf637786b5827237cb09b9b202db49f1eb" exitCode=1 Jan 26 00:30:44 crc kubenswrapper[5121]: I0126 00:30:44.040197 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"15fe1e64-e056-4d07-97a6-d19ad38afe03","Type":"ContainerDied","Data":"8bdbfd6e8f4050a80940614c3abd1daf637786b5827237cb09b9b202db49f1eb"} Jan 26 00:30:45 crc 
kubenswrapper[5121]: I0126 00:30:45.314617 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/docker-build/0.log" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.315971 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.412410 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-run\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.412480 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildworkdir\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.412509 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-proxy-ca-bundles\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.412533 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-blob-cache\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.412561 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-system-configs\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.412869 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-ca-bundles\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413048 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildcachedir\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413092 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-pull\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413185 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: 
\"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-push\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413113 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413334 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmdjp\" (UniqueName: \"kubernetes.io/projected/15fe1e64-e056-4d07-97a6-d19ad38afe03-kube-api-access-jmdjp\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413376 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-root\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413427 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-node-pullsecrets\") pod \"15fe1e64-e056-4d07-97a6-d19ad38afe03\" (UID: \"15fe1e64-e056-4d07-97a6-d19ad38afe03\") " Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413657 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413685 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.413817 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.414153 5121 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.414175 5121 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.414195 5121 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.414201 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.415246 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.421294 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-push" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-push") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "builder-dockercfg-n9bc6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.421437 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fe1e64-e056-4d07-97a6-d19ad38afe03-kube-api-access-jmdjp" (OuterVolumeSpecName: "kube-api-access-jmdjp") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "kube-api-access-jmdjp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.423986 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-pull" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-pull") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "builder-dockercfg-n9bc6-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.448402 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.515075 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jmdjp\" (UniqueName: \"kubernetes.io/projected/15fe1e64-e056-4d07-97a6-d19ad38afe03-kube-api-access-jmdjp\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.515115 5121 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15fe1e64-e056-4d07-97a6-d19ad38afe03-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.515125 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.515134 5121 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.515176 5121 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.515185 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.515194 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/15fe1e64-e056-4d07-97a6-d19ad38afe03-builder-dockercfg-n9bc6-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.633130 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:45 crc kubenswrapper[5121]: I0126 00:30:45.719114 5121 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:46 crc kubenswrapper[5121]: I0126 00:30:46.063274 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/docker-build/0.log" Jan 26 00:30:46 crc kubenswrapper[5121]: I0126 00:30:46.064553 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"15fe1e64-e056-4d07-97a6-d19ad38afe03","Type":"ContainerDied","Data":"7ba459dbbaa88dbd81117f8f88888da594b87506d68eb252ca75ecfd0715cf47"} Jan 26 00:30:46 crc kubenswrapper[5121]: I0126 00:30:46.064619 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba459dbbaa88dbd81117f8f88888da594b87506d68eb252ca75ecfd0715cf47" Jan 26 00:30:46 crc kubenswrapper[5121]: I0126 00:30:46.064860 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:30:47 crc kubenswrapper[5121]: I0126 00:30:47.325277 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "15fe1e64-e056-4d07-97a6-d19ad38afe03" (UID: "15fe1e64-e056-4d07-97a6-d19ad38afe03"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:30:47 crc kubenswrapper[5121]: I0126 00:30:47.344103 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15fe1e64-e056-4d07-97a6-d19ad38afe03-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.134010 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135458 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerName="docker-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135476 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerName="docker-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135492 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerName="git-clone" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135500 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerName="git-clone" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135510 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d6aea8a-4cc0-42c1-a1a7-fc011ba72294" containerName="collect-profiles" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135520 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6aea8a-4cc0-42c1-a1a7-fc011ba72294" containerName="collect-profiles" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135543 5121 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerName="manage-dockerfile" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135551 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerName="manage-dockerfile" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135569 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="53e9d6b8-6409-4cdd-8149-ac57bb7a0db5" containerName="oc" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135577 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e9d6b8-6409-4cdd-8149-ac57bb7a0db5" containerName="oc" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135741 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="53e9d6b8-6409-4cdd-8149-ac57bb7a0db5" containerName="oc" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135754 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="15fe1e64-e056-4d07-97a6-d19ad38afe03" containerName="docker-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.135786 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d6aea8a-4cc0-42c1-a1a7-fc011ba72294" containerName="collect-profiles" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.314228 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.314470 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.317598 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-n9bc6\"" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.320457 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.320726 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.320871 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346377 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346436 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346471 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346534 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346604 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnlfn\" (UniqueName: \"kubernetes.io/projected/39a99d00-3116-42fa-95af-f93382aa1930-kube-api-access-bnlfn\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346669 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346736 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346847 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346878 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346896 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346949 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.346976 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.448699 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bnlfn\" (UniqueName: \"kubernetes.io/projected/39a99d00-3116-42fa-95af-f93382aa1930-kube-api-access-bnlfn\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.448822 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.448982 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449249 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449355 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449385 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449403 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449401 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449459 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449518 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449645 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449678 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449714 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.449775 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.450205 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.450217 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.450241 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.450545 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.450625 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.450982 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.451482 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.456296 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.456284 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.469598 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnlfn\" (UniqueName: \"kubernetes.io/projected/39a99d00-3116-42fa-95af-f93382aa1930-kube-api-access-bnlfn\") pod \"service-telemetry-operator-4-build\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.635444 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.862370 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:30:56 crc kubenswrapper[5121]: I0126 00:30:56.870080 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:30:57 crc kubenswrapper[5121]: I0126 00:30:57.152520 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerStarted","Data":"6a33d4e17a963169d71bdc9502e7068b41710ac863f7f0ba7d73a9e2e5106add"} Jan 26 00:30:58 crc kubenswrapper[5121]: I0126 00:30:58.162978 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerStarted","Data":"2df7e7201dd49d0fd47b3d60c0f946f50056c48d227653df9f1176e6d8533a88"} Jan 26 00:31:01 crc kubenswrapper[5121]: I0126 00:31:01.802258 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:01 crc kubenswrapper[5121]: I0126 00:31:01.803002 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:31:06 crc kubenswrapper[5121]: I0126 00:31:06.254372 5121 generic.go:358] "Generic (PLEG): container finished" podID="39a99d00-3116-42fa-95af-f93382aa1930" containerID="2df7e7201dd49d0fd47b3d60c0f946f50056c48d227653df9f1176e6d8533a88" exitCode=0 Jan 26 00:31:06 crc kubenswrapper[5121]: I0126 00:31:06.254524 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerDied","Data":"2df7e7201dd49d0fd47b3d60c0f946f50056c48d227653df9f1176e6d8533a88"} Jan 26 00:31:07 crc kubenswrapper[5121]: I0126 00:31:07.275021 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerStarted","Data":"594b4895500cbebff7d952250c1e22e154be99fa1a3730b308048cb823e964e9"} Jan 26 00:31:08 crc kubenswrapper[5121]: I0126 00:31:08.285912 5121 generic.go:358] "Generic (PLEG): container finished" podID="39a99d00-3116-42fa-95af-f93382aa1930" containerID="594b4895500cbebff7d952250c1e22e154be99fa1a3730b308048cb823e964e9" exitCode=0 Jan 26 00:31:08 crc kubenswrapper[5121]: I0126 00:31:08.285991 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerDied","Data":"594b4895500cbebff7d952250c1e22e154be99fa1a3730b308048cb823e964e9"} Jan 26 00:31:09 crc kubenswrapper[5121]: I0126 
00:31:09.318482 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerStarted","Data":"240d5dac08c26f34e836a6fe0ef5397aaf94d8b39206d169324abe3503a9ec43"} Jan 26 00:31:09 crc kubenswrapper[5121]: I0126 00:31:09.402486 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-4-build" podStartSLOduration=13.40245879 podStartE2EDuration="13.40245879s" podCreationTimestamp="2026-01-26 00:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:31:09.402363067 +0000 UTC m=+1300.561564182" watchObservedRunningTime="2026-01-26 00:31:09.40245879 +0000 UTC m=+1300.561659915" Jan 26 00:31:31 crc kubenswrapper[5121]: I0126 00:31:31.802021 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:31 crc kubenswrapper[5121]: I0126 00:31:31.804398 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.144059 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489792-qcdfp"] Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.172403 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-qcdfp"] Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.172883 5121 util.go:30] "No sandbox for pod can be found. 
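[Annotation] The "Observed pod startup duration" record above is just the gap between podCreationTimestamp (00:30:56) and observedRunningTime (00:31:09.402363067): about 13.402 s, matching the reported podStartSLOduration=13.40245879 to within the tracker's own sampling jitter. A quick check of that arithmetic, with the two timestamps copied from the record (nothing here is a kubelet API):

```python
from datetime import datetime, timezone

# Timestamps copied from the "Observed pod startup duration" record.
created = datetime(2026, 1, 26, 0, 30, 56, tzinfo=timezone.utc)
running = datetime(2026, 1, 26, 0, 31, 9, 402363, tzinfo=timezone.utc)

# Prints 13.402363 -- the SLO duration equals the E2E duration here
# because the pod pulled no images (firstStartedPulling and
# lastFinishedPulling are still the zero time, 0001-01-01).
print((running - created).total_seconds())
```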
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-qcdfp" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.180755 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.183646 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.183659 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.290534 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7ft\" (UniqueName: \"kubernetes.io/projected/a141d6ed-0de3-4599-85bc-881fafe98e8f-kube-api-access-9w7ft\") pod \"auto-csr-approver-29489792-qcdfp\" (UID: \"a141d6ed-0de3-4599-85bc-881fafe98e8f\") " pod="openshift-infra/auto-csr-approver-29489792-qcdfp" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.392316 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7ft\" (UniqueName: \"kubernetes.io/projected/a141d6ed-0de3-4599-85bc-881fafe98e8f-kube-api-access-9w7ft\") pod \"auto-csr-approver-29489792-qcdfp\" (UID: \"a141d6ed-0de3-4599-85bc-881fafe98e8f\") " pod="openshift-infra/auto-csr-approver-29489792-qcdfp" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.416423 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7ft\" (UniqueName: \"kubernetes.io/projected/a141d6ed-0de3-4599-85bc-881fafe98e8f-kube-api-access-9w7ft\") pod \"auto-csr-approver-29489792-qcdfp\" (UID: \"a141d6ed-0de3-4599-85bc-881fafe98e8f\") " pod="openshift-infra/auto-csr-approver-29489792-qcdfp" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.508867 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-qcdfp" Jan 26 00:32:00 crc kubenswrapper[5121]: I0126 00:32:00.944487 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-qcdfp"] Jan 26 00:32:01 crc kubenswrapper[5121]: I0126 00:32:01.734703 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-qcdfp" event={"ID":"a141d6ed-0de3-4599-85bc-881fafe98e8f","Type":"ContainerStarted","Data":"c7122af941064d5a863f28147b2edea2b03af7521e6f86115524bd4506e063f8"} Jan 26 00:32:01 crc kubenswrapper[5121]: I0126 00:32:01.801612 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:32:01 crc kubenswrapper[5121]: I0126 00:32:01.802245 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:32:01 crc kubenswrapper[5121]: I0126 00:32:01.802374 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:32:01 crc kubenswrapper[5121]: I0126 00:32:01.803455 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b833963d85ba51f54d5d46d8a4bcffc5186b5cf7198ce48a03fd6f13859dcd53"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:32:01 crc kubenswrapper[5121]: I0126 00:32:01.803628 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" containerID="cri-o://b833963d85ba51f54d5d46d8a4bcffc5186b5cf7198ce48a03fd6f13859dcd53" gracePeriod=600 Jan 26 00:32:02 crc kubenswrapper[5121]: I0126 00:32:02.756831 5121 generic.go:358] "Generic (PLEG): container finished" podID="a141d6ed-0de3-4599-85bc-881fafe98e8f" containerID="fe3bc77ad2e30cd08c54bdd775bf8ef6e5e28855d49dc3f97c19bb22b1f3415a" exitCode=0 Jan 26 00:32:02 crc kubenswrapper[5121]: I0126 00:32:02.756907 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-qcdfp" event={"ID":"a141d6ed-0de3-4599-85bc-881fafe98e8f","Type":"ContainerDied","Data":"fe3bc77ad2e30cd08c54bdd775bf8ef6e5e28855d49dc3f97c19bb22b1f3415a"} Jan 26 00:32:02 crc kubenswrapper[5121]: I0126 00:32:02.773945 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="b833963d85ba51f54d5d46d8a4bcffc5186b5cf7198ce48a03fd6f13859dcd53" exitCode=0 Jan 26 00:32:02 crc kubenswrapper[5121]: I0126 00:32:02.774051 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"b833963d85ba51f54d5d46d8a4bcffc5186b5cf7198ce48a03fd6f13859dcd53"} Jan 26 00:32:02 crc 
Jan 26 00:32:02 crc kubenswrapper[5121]: I0126 00:32:02.774542 5121 scope.go:117] "RemoveContainer" containerID="d40065c8f3cb43a8730adbc34bd9fe8db62d85dab732f2dbdec9e5ddf9d6e21f"
Jan 26 00:32:04 crc kubenswrapper[5121]: I0126 00:32:04.051200 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-qcdfp"
Jan 26 00:32:04 crc kubenswrapper[5121]: I0126 00:32:04.150572 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w7ft\" (UniqueName: \"kubernetes.io/projected/a141d6ed-0de3-4599-85bc-881fafe98e8f-kube-api-access-9w7ft\") pod \"a141d6ed-0de3-4599-85bc-881fafe98e8f\" (UID: \"a141d6ed-0de3-4599-85bc-881fafe98e8f\") "
Jan 26 00:32:04 crc kubenswrapper[5121]: I0126 00:32:04.157568 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a141d6ed-0de3-4599-85bc-881fafe98e8f-kube-api-access-9w7ft" (OuterVolumeSpecName: "kube-api-access-9w7ft") pod "a141d6ed-0de3-4599-85bc-881fafe98e8f" (UID: "a141d6ed-0de3-4599-85bc-881fafe98e8f"). InnerVolumeSpecName "kube-api-access-9w7ft". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 26 00:32:04 crc kubenswrapper[5121]: I0126 00:32:04.252526 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9w7ft\" (UniqueName: \"kubernetes.io/projected/a141d6ed-0de3-4599-85bc-881fafe98e8f-kube-api-access-9w7ft\") on node \"crc\" DevicePath \"\""
Jan 26 00:32:04 crc kubenswrapper[5121]: I0126 00:32:04.816282 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489792-qcdfp" event={"ID":"a141d6ed-0de3-4599-85bc-881fafe98e8f","Type":"ContainerDied","Data":"c7122af941064d5a863f28147b2edea2b03af7521e6f86115524bd4506e063f8"}
Jan 26 00:32:04 crc kubenswrapper[5121]: I0126 00:32:04.816825 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7122af941064d5a863f28147b2edea2b03af7521e6f86115524bd4506e063f8"
Jan 26 00:32:04 crc kubenswrapper[5121]: I0126 00:32:04.816321 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489792-qcdfp"
Jan 26 00:32:05 crc kubenswrapper[5121]: I0126 00:32:05.127186 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-ghsf9"]
Jan 26 00:32:05 crc kubenswrapper[5121]: I0126 00:32:05.132745 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489786-ghsf9"]
Jan 26 00:32:06 crc kubenswrapper[5121]: I0126 00:32:06.265258 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34987d5f-649b-444c-a15e-482e13593729" path="/var/lib/kubelet/pods/34987d5f-649b-444c-a15e-482e13593729/volumes"
Jan 26 00:32:22 crc kubenswrapper[5121]: I0126 00:32:22.971480 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_39a99d00-3116-42fa-95af-f93382aa1930/docker-build/0.log"
Jan 26 00:32:22 crc kubenswrapper[5121]: I0126 00:32:22.973446 5121 generic.go:358] "Generic (PLEG): container finished" podID="39a99d00-3116-42fa-95af-f93382aa1930" containerID="240d5dac08c26f34e836a6fe0ef5397aaf94d8b39206d169324abe3503a9ec43" exitCode=1
Jan 26 00:32:22 crc kubenswrapper[5121]: I0126 00:32:22.973568 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerDied","Data":"240d5dac08c26f34e836a6fe0ef5397aaf94d8b39206d169324abe3503a9ec43"}
Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.533517 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_39a99d00-3116-42fa-95af-f93382aa1930/docker-build/0.log"
Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.535244 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700148 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-system-configs\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700253 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-build-blob-cache\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700310 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnlfn\" (UniqueName: \"kubernetes.io/projected/39a99d00-3116-42fa-95af-f93382aa1930-kube-api-access-bnlfn\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700330 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-node-pullsecrets\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700403 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-root\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700495 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-run\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700543 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-buildcachedir\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700691 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-push\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700748 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-ca-bundles\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700788 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: 
\"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-pull\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700839 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-buildworkdir\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.700992 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-proxy-ca-bundles\") pod \"39a99d00-3116-42fa-95af-f93382aa1930\" (UID: \"39a99d00-3116-42fa-95af-f93382aa1930\") " Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.701027 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.702058 5121 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.702149 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.702240 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.702295 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.702318 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.703050 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.709070 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-pull" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-pull") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "builder-dockercfg-n9bc6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.711831 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a99d00-3116-42fa-95af-f93382aa1930-kube-api-access-bnlfn" (OuterVolumeSpecName: "kube-api-access-bnlfn") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "kube-api-access-bnlfn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.718713 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-push" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-push") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "builder-dockercfg-n9bc6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.751037 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804096 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804145 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804156 5121 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804165 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/39a99d00-3116-42fa-95af-f93382aa1930-builder-dockercfg-n9bc6-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804174 5121 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804184 5121 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804193 5121 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/39a99d00-3116-42fa-95af-f93382aa1930-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804205 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bnlfn\" (UniqueName: \"kubernetes.io/projected/39a99d00-3116-42fa-95af-f93382aa1930-kube-api-access-bnlfn\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.804213 5121 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39a99d00-3116-42fa-95af-f93382aa1930-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.905160 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.990651 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_39a99d00-3116-42fa-95af-f93382aa1930/docker-build/0.log" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.991800 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"39a99d00-3116-42fa-95af-f93382aa1930","Type":"ContainerDied","Data":"6a33d4e17a963169d71bdc9502e7068b41710ac863f7f0ba7d73a9e2e5106add"} Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.991840 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a33d4e17a963169d71bdc9502e7068b41710ac863f7f0ba7d73a9e2e5106add" Jan 26 00:32:24 crc kubenswrapper[5121]: I0126 00:32:24.991920 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:32:25 crc kubenswrapper[5121]: I0126 00:32:25.006359 5121 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:26 crc kubenswrapper[5121]: I0126 00:32:26.546270 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "39a99d00-3116-42fa-95af-f93382aa1930" (UID: "39a99d00-3116-42fa-95af-f93382aa1930"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:32:26 crc kubenswrapper[5121]: I0126 00:32:26.627904 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/39a99d00-3116-42fa-95af-f93382aa1930-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:33 crc kubenswrapper[5121]: I0126 00:32:33.506811 5121 scope.go:117] "RemoveContainer" containerID="d7101112d14185db154a66a8852229eb8522dcdf484dccbe76bbfa23cda91564" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.144892 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146589 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39a99d00-3116-42fa-95af-f93382aa1930" containerName="docker-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146612 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a99d00-3116-42fa-95af-f93382aa1930" containerName="docker-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146638 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39a99d00-3116-42fa-95af-f93382aa1930" containerName="manage-dockerfile" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146649 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a99d00-3116-42fa-95af-f93382aa1930" containerName="manage-dockerfile" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146663 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39a99d00-3116-42fa-95af-f93382aa1930" containerName="git-clone" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146670 5121 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="39a99d00-3116-42fa-95af-f93382aa1930" containerName="git-clone" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146703 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a141d6ed-0de3-4599-85bc-881fafe98e8f" containerName="oc" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146710 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a141d6ed-0de3-4599-85bc-881fafe98e8f" containerName="oc" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146939 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a141d6ed-0de3-4599-85bc-881fafe98e8f" containerName="oc" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.146957 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="39a99d00-3116-42fa-95af-f93382aa1930" containerName="docker-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.152567 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.156868 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.157053 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-n9bc6\"" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.157270 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.157957 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.168893 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.307616 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.307684 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.307736 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.307985 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ktc\" 
(UniqueName: \"kubernetes.io/projected/b497883f-da14-4bfe-8e19-bba4b32b7f79-kube-api-access-v5ktc\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308150 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308224 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308273 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308311 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308370 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308428 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308452 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.308478 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.410964 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.411042 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.411080 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.411158 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.411336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.411607 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412156 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412394 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412439 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412680 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412799 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5ktc\" (UniqueName: \"kubernetes.io/projected/b497883f-da14-4bfe-8e19-bba4b32b7f79-kube-api-access-v5ktc\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412838 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412895 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.412971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.413016 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.413045 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.413060 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.413171 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.413710 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.413751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.414008 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.419649 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-push\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.420978 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.434668 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5ktc\" (UniqueName: \"kubernetes.io/projected/b497883f-da14-4bfe-8e19-bba4b32b7f79-kube-api-access-v5ktc\") pod \"service-telemetry-operator-5-build\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.476181 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:32:35 crc kubenswrapper[5121]: I0126 00:32:35.741967 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:32:36 crc kubenswrapper[5121]: I0126 00:32:36.093568 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"b497883f-da14-4bfe-8e19-bba4b32b7f79","Type":"ContainerStarted","Data":"8ff920c8b2a263560ea13240a7925c33982d9045e9727e5a50b7b49ec146a9ae"} Jan 26 00:32:37 crc kubenswrapper[5121]: I0126 00:32:37.122621 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"b497883f-da14-4bfe-8e19-bba4b32b7f79","Type":"ContainerStarted","Data":"8a11d8b93e339e53641c42bb213674e0e2da4d36b0b84d4e2ae249fc44373335"} Jan 26 00:32:45 crc kubenswrapper[5121]: I0126 00:32:45.203270 5121 generic.go:358] "Generic (PLEG): container finished" podID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerID="8a11d8b93e339e53641c42bb213674e0e2da4d36b0b84d4e2ae249fc44373335" exitCode=0 Jan 26 00:32:45 crc kubenswrapper[5121]: I0126 00:32:45.203371 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"b497883f-da14-4bfe-8e19-bba4b32b7f79","Type":"ContainerDied","Data":"8a11d8b93e339e53641c42bb213674e0e2da4d36b0b84d4e2ae249fc44373335"} Jan 26 00:32:46 crc kubenswrapper[5121]: I0126 00:32:46.216544 5121 generic.go:358] "Generic (PLEG): container finished" podID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerID="b1e604c6beeb90fa7b2c67dbe0065884bce5b7bc8c00382ca95e08fe4f512ae7" exitCode=0 Jan 26 00:32:46 crc kubenswrapper[5121]: I0126 00:32:46.216661 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"b497883f-da14-4bfe-8e19-bba4b32b7f79","Type":"ContainerDied","Data":"b1e604c6beeb90fa7b2c67dbe0065884bce5b7bc8c00382ca95e08fe4f512ae7"} Jan 26 00:32:46 crc kubenswrapper[5121]: I0126 00:32:46.258501 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/manage-dockerfile/0.log" Jan 26 00:32:47 crc kubenswrapper[5121]: I0126 00:32:47.230094 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"b497883f-da14-4bfe-8e19-bba4b32b7f79","Type":"ContainerStarted","Data":"0994ebc86fe86e683207b5e94e0ea82573e1426911378e5ffb86d0187605b118"} Jan 26 00:32:47 crc kubenswrapper[5121]: I0126 00:32:47.271930 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-5-build" podStartSLOduration=12.271873741 podStartE2EDuration="12.271873741s" podCreationTimestamp="2026-01-26 00:32:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:32:47.260074777 +0000 UTC m=+1398.419275922" watchObservedRunningTime="2026-01-26 00:32:47.271873741 +0000 UTC m=+1398.431074866" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.070185 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/docker-build/0.log" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.071751 5121 
generic.go:358] "Generic (PLEG): container finished" podID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerID="0994ebc86fe86e683207b5e94e0ea82573e1426911378e5ffb86d0187605b118" exitCode=1 Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.072050 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"b497883f-da14-4bfe-8e19-bba4b32b7f79","Type":"ContainerDied","Data":"0994ebc86fe86e683207b5e94e0ea82573e1426911378e5ffb86d0187605b118"} Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.147991 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489794-7vv9p"] Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.154822 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.157717 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.158087 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.158417 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.163318 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-7vv9p"] Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.182375 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x47s\" (UniqueName: \"kubernetes.io/projected/a171d695-ffb7-47b1-9c43-0800ab8d9c59-kube-api-access-2x47s\") pod \"auto-csr-approver-29489794-7vv9p\" (UID: \"a171d695-ffb7-47b1-9c43-0800ab8d9c59\") " pod="openshift-infra/auto-csr-approver-29489794-7vv9p" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.284068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2x47s\" (UniqueName: \"kubernetes.io/projected/a171d695-ffb7-47b1-9c43-0800ab8d9c59-kube-api-access-2x47s\") pod \"auto-csr-approver-29489794-7vv9p\" (UID: \"a171d695-ffb7-47b1-9c43-0800ab8d9c59\") " pod="openshift-infra/auto-csr-approver-29489794-7vv9p" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.313529 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x47s\" (UniqueName: \"kubernetes.io/projected/a171d695-ffb7-47b1-9c43-0800ab8d9c59-kube-api-access-2x47s\") pod \"auto-csr-approver-29489794-7vv9p\" (UID: \"a171d695-ffb7-47b1-9c43-0800ab8d9c59\") " pod="openshift-infra/auto-csr-approver-29489794-7vv9p" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.480263 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" Jan 26 00:34:00 crc kubenswrapper[5121]: I0126 00:34:00.717074 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-7vv9p"] Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.082259 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" event={"ID":"a171d695-ffb7-47b1-9c43-0800ab8d9c59","Type":"ContainerStarted","Data":"f40ac8c2c59d5cae36c69ace3e843cc5946c47d456dee3b810c625d1428f81f0"} Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.360054 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/docker-build/0.log" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.360941 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400403 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-push\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400483 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-node-pullsecrets\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400536 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-system-configs\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400554 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-blob-cache\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400575 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5ktc\" (UniqueName: \"kubernetes.io/projected/b497883f-da14-4bfe-8e19-bba4b32b7f79-kube-api-access-v5ktc\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400600 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-ca-bundles\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400640 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildworkdir\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") 
" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400696 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-run\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400722 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-proxy-ca-bundles\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400750 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-root\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400853 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildcachedir\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400893 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-pull\") pod \"b497883f-da14-4bfe-8e19-bba4b32b7f79\" (UID: \"b497883f-da14-4bfe-8e19-bba4b32b7f79\") " Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.400992 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.402356 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.402837 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.402873 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.402874 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.403053 5121 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.405299 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.409902 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b497883f-da14-4bfe-8e19-bba4b32b7f79-kube-api-access-v5ktc" (OuterVolumeSpecName: "kube-api-access-v5ktc") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "kube-api-access-v5ktc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.410026 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-pull" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-pull") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "builder-dockercfg-n9bc6-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.412900 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-push" (OuterVolumeSpecName: "builder-dockercfg-n9bc6-push") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "builder-dockercfg-n9bc6-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.438112 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505135 5121 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505182 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-pull\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505201 5121 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-n9bc6-push\" (UniqueName: \"kubernetes.io/secret/b497883f-da14-4bfe-8e19-bba4b32b7f79-builder-dockercfg-n9bc6-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505213 5121 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505226 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5ktc\" (UniqueName: \"kubernetes.io/projected/b497883f-da14-4bfe-8e19-bba4b32b7f79-kube-api-access-v5ktc\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505237 5121 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505249 5121 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505261 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.505273 5121 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.642255 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:01 crc kubenswrapper[5121]: I0126 00:34:01.709272 5121 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:02 crc kubenswrapper[5121]: I0126 00:34:02.092787 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/docker-build/0.log" Jan 26 00:34:02 crc kubenswrapper[5121]: I0126 00:34:02.093820 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"b497883f-da14-4bfe-8e19-bba4b32b7f79","Type":"ContainerDied","Data":"8ff920c8b2a263560ea13240a7925c33982d9045e9727e5a50b7b49ec146a9ae"} Jan 26 00:34:02 crc kubenswrapper[5121]: I0126 00:34:02.093853 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ff920c8b2a263560ea13240a7925c33982d9045e9727e5a50b7b49ec146a9ae" Jan 26 00:34:02 crc kubenswrapper[5121]: I0126 00:34:02.093936 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:34:02 crc kubenswrapper[5121]: I0126 00:34:02.097150 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" event={"ID":"a171d695-ffb7-47b1-9c43-0800ab8d9c59","Type":"ContainerStarted","Data":"768ffd361758d3df5cfac75c558da6538fff7a45adfe432a29f23c07a8d81951"} Jan 26 00:34:02 crc kubenswrapper[5121]: I0126 00:34:02.121313 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" podStartSLOduration=1.204868479 podStartE2EDuration="2.121275227s" podCreationTimestamp="2026-01-26 00:34:00 +0000 UTC" firstStartedPulling="2026-01-26 00:34:00.726128074 +0000 UTC m=+1471.885329199" lastFinishedPulling="2026-01-26 00:34:01.642534822 +0000 UTC m=+1472.801735947" observedRunningTime="2026-01-26 00:34:02.120102454 +0000 UTC m=+1473.279303599" watchObservedRunningTime="2026-01-26 00:34:02.121275227 +0000 UTC m=+1473.280476352" Jan 26 00:34:03 crc kubenswrapper[5121]: I0126 00:34:03.110950 5121 generic.go:358] "Generic (PLEG): container finished" podID="a171d695-ffb7-47b1-9c43-0800ab8d9c59" containerID="768ffd361758d3df5cfac75c558da6538fff7a45adfe432a29f23c07a8d81951" exitCode=0 Jan 26 00:34:03 crc kubenswrapper[5121]: I0126 00:34:03.111083 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" event={"ID":"a171d695-ffb7-47b1-9c43-0800ab8d9c59","Type":"ContainerDied","Data":"768ffd361758d3df5cfac75c558da6538fff7a45adfe432a29f23c07a8d81951"} Jan 26 00:34:03 crc kubenswrapper[5121]: I0126 00:34:03.308220 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b497883f-da14-4bfe-8e19-bba4b32b7f79" (UID: "b497883f-da14-4bfe-8e19-bba4b32b7f79"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:34:03 crc kubenswrapper[5121]: I0126 00:34:03.344499 5121 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b497883f-da14-4bfe-8e19-bba4b32b7f79-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:04 crc kubenswrapper[5121]: I0126 00:34:04.378686 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" Jan 26 00:34:04 crc kubenswrapper[5121]: I0126 00:34:04.467036 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x47s\" (UniqueName: \"kubernetes.io/projected/a171d695-ffb7-47b1-9c43-0800ab8d9c59-kube-api-access-2x47s\") pod \"a171d695-ffb7-47b1-9c43-0800ab8d9c59\" (UID: \"a171d695-ffb7-47b1-9c43-0800ab8d9c59\") " Jan 26 00:34:04 crc kubenswrapper[5121]: I0126 00:34:04.476695 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a171d695-ffb7-47b1-9c43-0800ab8d9c59-kube-api-access-2x47s" (OuterVolumeSpecName: "kube-api-access-2x47s") pod "a171d695-ffb7-47b1-9c43-0800ab8d9c59" (UID: "a171d695-ffb7-47b1-9c43-0800ab8d9c59"). InnerVolumeSpecName "kube-api-access-2x47s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:34:04 crc kubenswrapper[5121]: I0126 00:34:04.568884 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2x47s\" (UniqueName: \"kubernetes.io/projected/a171d695-ffb7-47b1-9c43-0800ab8d9c59-kube-api-access-2x47s\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:05 crc kubenswrapper[5121]: I0126 00:34:05.133422 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" Jan 26 00:34:05 crc kubenswrapper[5121]: I0126 00:34:05.133468 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489794-7vv9p" event={"ID":"a171d695-ffb7-47b1-9c43-0800ab8d9c59","Type":"ContainerDied","Data":"f40ac8c2c59d5cae36c69ace3e843cc5946c47d456dee3b810c625d1428f81f0"} Jan 26 00:34:05 crc kubenswrapper[5121]: I0126 00:34:05.133542 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f40ac8c2c59d5cae36c69ace3e843cc5946c47d456dee3b810c625d1428f81f0" Jan 26 00:34:05 crc kubenswrapper[5121]: I0126 00:34:05.197203 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-9f7zk"] Jan 26 00:34:05 crc kubenswrapper[5121]: I0126 00:34:05.205039 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489788-9f7zk"] Jan 26 00:34:06 crc kubenswrapper[5121]: I0126 00:34:06.266125 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f93e1d-b2f8-46ec-9016-f2dbfa6012d0" path="/var/lib/kubelet/pods/78f93e1d-b2f8-46ec-9016-f2dbfa6012d0/volumes" Jan 26 00:34:31 crc kubenswrapper[5121]: I0126 00:34:31.801846 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:34:31 crc kubenswrapper[5121]: I0126 00:34:31.802638 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" 
podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.240649 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.242276 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.243882 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_39a99d00-3116-42fa-95af-f93382aa1930/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.244341 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_39a99d00-3116-42fa-95af-f93382aa1930/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.247128 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.247126 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.250668 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.251946 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.301932 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.302132 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.309397 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.310126 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.310972 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.311541 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 
00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.313700 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.314170 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:34:33 crc kubenswrapper[5121]: I0126 00:34:33.665613 5121 scope.go:117] "RemoveContainer" containerID="8655a4ab7161a3d596c431f478eaf375bc3cc7c6fee7efb90cd1e3cbb64e8aa8" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.475254 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qnhhj/must-gather-d2bp6"] Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477522 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerName="manage-dockerfile" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477573 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerName="manage-dockerfile" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477612 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a171d695-ffb7-47b1-9c43-0800ab8d9c59" containerName="oc" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477622 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a171d695-ffb7-47b1-9c43-0800ab8d9c59" containerName="oc" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477643 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerName="docker-build" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477651 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerName="docker-build" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477692 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerName="git-clone" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.477700 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerName="git-clone" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.478086 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="b497883f-da14-4bfe-8e19-bba4b32b7f79" containerName="docker-build" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.478105 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a171d695-ffb7-47b1-9c43-0800ab8d9c59" containerName="oc" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.492050 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.501842 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-qnhhj\"/\"kube-root-ca.crt\"" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.502005 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-qnhhj\"/\"openshift-service-ca.crt\"" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.502136 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-qnhhj\"/\"default-dockercfg-ghgv6\"" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.521732 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qnhhj/must-gather-d2bp6"] Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.603471 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8mn\" (UniqueName: \"kubernetes.io/projected/c959418d-f3dc-4e83-93ba-fe643c9c9e79-kube-api-access-hm8mn\") pod \"must-gather-d2bp6\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.603526 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c959418d-f3dc-4e83-93ba-fe643c9c9e79-must-gather-output\") pod \"must-gather-d2bp6\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.705944 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8mn\" (UniqueName: \"kubernetes.io/projected/c959418d-f3dc-4e83-93ba-fe643c9c9e79-kube-api-access-hm8mn\") pod \"must-gather-d2bp6\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.706055 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c959418d-f3dc-4e83-93ba-fe643c9c9e79-must-gather-output\") pod \"must-gather-d2bp6\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.706535 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c959418d-f3dc-4e83-93ba-fe643c9c9e79-must-gather-output\") pod \"must-gather-d2bp6\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.733675 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8mn\" (UniqueName: \"kubernetes.io/projected/c959418d-f3dc-4e83-93ba-fe643c9c9e79-kube-api-access-hm8mn\") pod \"must-gather-d2bp6\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:50 crc kubenswrapper[5121]: I0126 00:34:50.832325 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:34:51 crc kubenswrapper[5121]: I0126 00:34:51.214958 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qnhhj/must-gather-d2bp6"] Jan 26 00:34:51 crc kubenswrapper[5121]: I0126 00:34:51.554692 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" event={"ID":"c959418d-f3dc-4e83-93ba-fe643c9c9e79","Type":"ContainerStarted","Data":"67341d33ce6af899bbf7d87be09b6056990d5a3b7f7d24eee2050317e3706b82"} Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.402899 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lnczh"] Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.422461 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lnczh"] Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.422739 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.544046 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xft\" (UniqueName: \"kubernetes.io/projected/55f133d1-5dd7-4341-8c4a-d53fa022ea72-kube-api-access-92xft\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.544129 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-utilities\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.544154 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-catalog-content\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.645299 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-utilities\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.645350 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-catalog-content\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.645442 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-92xft\" (UniqueName: \"kubernetes.io/projected/55f133d1-5dd7-4341-8c4a-d53fa022ea72-kube-api-access-92xft\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 
00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.645956 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-utilities\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.645967 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-catalog-content\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.674711 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xft\" (UniqueName: \"kubernetes.io/projected/55f133d1-5dd7-4341-8c4a-d53fa022ea72-kube-api-access-92xft\") pod \"certified-operators-lnczh\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:52 crc kubenswrapper[5121]: I0126 00:34:52.753123 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:34:53 crc kubenswrapper[5121]: I0126 00:34:53.128114 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lnczh"] Jan 26 00:34:53 crc kubenswrapper[5121]: W0126 00:34:53.134593 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55f133d1_5dd7_4341_8c4a_d53fa022ea72.slice/crio-100ef1e83cfb1c4fd496e5b662678b70a4f4a2a9902513fbd4d5cf27510e684b WatchSource:0}: Error finding container 100ef1e83cfb1c4fd496e5b662678b70a4f4a2a9902513fbd4d5cf27510e684b: Status 404 returned error can't find the container with id 100ef1e83cfb1c4fd496e5b662678b70a4f4a2a9902513fbd4d5cf27510e684b Jan 26 00:34:53 crc kubenswrapper[5121]: I0126 00:34:53.614277 5121 generic.go:358] "Generic (PLEG): container finished" podID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerID="0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9" exitCode=0 Jan 26 00:34:53 crc kubenswrapper[5121]: I0126 00:34:53.614797 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnczh" event={"ID":"55f133d1-5dd7-4341-8c4a-d53fa022ea72","Type":"ContainerDied","Data":"0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9"} Jan 26 00:34:53 crc kubenswrapper[5121]: I0126 00:34:53.614837 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnczh" event={"ID":"55f133d1-5dd7-4341-8c4a-d53fa022ea72","Type":"ContainerStarted","Data":"100ef1e83cfb1c4fd496e5b662678b70a4f4a2a9902513fbd4d5cf27510e684b"} Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.800732 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tlzks"] Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.845935 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tlzks"] Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.846297 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.880869 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxkw\" (UniqueName: \"kubernetes.io/projected/e671ad9d-925b-4143-aeba-285abb548362-kube-api-access-hnxkw\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.881128 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-utilities\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.881738 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-catalog-content\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.983040 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-catalog-content\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.983128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnxkw\" (UniqueName: \"kubernetes.io/projected/e671ad9d-925b-4143-aeba-285abb548362-kube-api-access-hnxkw\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.983309 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-utilities\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.984163 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-utilities\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:55 crc kubenswrapper[5121]: I0126 00:34:55.984430 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-catalog-content\") pod \"redhat-operators-tlzks\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:56 crc kubenswrapper[5121]: I0126 00:34:56.005485 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnxkw\" (UniqueName: \"kubernetes.io/projected/e671ad9d-925b-4143-aeba-285abb548362-kube-api-access-hnxkw\") pod \"redhat-operators-tlzks\" (UID: 
\"e671ad9d-925b-4143-aeba-285abb548362\") " pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:56 crc kubenswrapper[5121]: I0126 00:34:56.165141 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:34:58 crc kubenswrapper[5121]: I0126 00:34:58.673997 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnczh" event={"ID":"55f133d1-5dd7-4341-8c4a-d53fa022ea72","Type":"ContainerStarted","Data":"2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108"} Jan 26 00:34:58 crc kubenswrapper[5121]: I0126 00:34:58.747281 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tlzks"] Jan 26 00:34:58 crc kubenswrapper[5121]: W0126 00:34:58.767266 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode671ad9d_925b_4143_aeba_285abb548362.slice/crio-2224dc69a8aff61220378aed4bac0685a409dd2028c8b4e0c8b6443c11e3a1fd WatchSource:0}: Error finding container 2224dc69a8aff61220378aed4bac0685a409dd2028c8b4e0c8b6443c11e3a1fd: Status 404 returned error can't find the container with id 2224dc69a8aff61220378aed4bac0685a409dd2028c8b4e0c8b6443c11e3a1fd Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.684647 5121 generic.go:358] "Generic (PLEG): container finished" podID="e671ad9d-925b-4143-aeba-285abb548362" containerID="86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc" exitCode=0 Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.684793 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlzks" event={"ID":"e671ad9d-925b-4143-aeba-285abb548362","Type":"ContainerDied","Data":"86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc"} Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.685231 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlzks" event={"ID":"e671ad9d-925b-4143-aeba-285abb548362","Type":"ContainerStarted","Data":"2224dc69a8aff61220378aed4bac0685a409dd2028c8b4e0c8b6443c11e3a1fd"} Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.692194 5121 generic.go:358] "Generic (PLEG): container finished" podID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerID="2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108" exitCode=0 Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.692393 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnczh" event={"ID":"55f133d1-5dd7-4341-8c4a-d53fa022ea72","Type":"ContainerDied","Data":"2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108"} Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.696256 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" event={"ID":"c959418d-f3dc-4e83-93ba-fe643c9c9e79","Type":"ContainerStarted","Data":"85400845f60c667265cbde43ce739d30d443c4c3881bb32fcb0b92e1f57c6861"} Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.696314 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" event={"ID":"c959418d-f3dc-4e83-93ba-fe643c9c9e79","Type":"ContainerStarted","Data":"6c659b14738e6c553ed2ad3f521b3db2666870733af672318c32ea122db890f1"} Jan 26 00:34:59 crc kubenswrapper[5121]: I0126 00:34:59.735017 5121 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" podStartSLOduration=2.586471069 podStartE2EDuration="9.734982322s" podCreationTimestamp="2026-01-26 00:34:50 +0000 UTC" firstStartedPulling="2026-01-26 00:34:51.24201581 +0000 UTC m=+1522.401216925" lastFinishedPulling="2026-01-26 00:34:58.390527053 +0000 UTC m=+1529.549728178" observedRunningTime="2026-01-26 00:34:59.730002051 +0000 UTC m=+1530.889203206" watchObservedRunningTime="2026-01-26 00:34:59.734982322 +0000 UTC m=+1530.894183447" Jan 26 00:35:00 crc kubenswrapper[5121]: I0126 00:35:00.704919 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlzks" event={"ID":"e671ad9d-925b-4143-aeba-285abb548362","Type":"ContainerStarted","Data":"54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff"} Jan 26 00:35:00 crc kubenswrapper[5121]: I0126 00:35:00.710561 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnczh" event={"ID":"55f133d1-5dd7-4341-8c4a-d53fa022ea72","Type":"ContainerStarted","Data":"4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c"} Jan 26 00:35:00 crc kubenswrapper[5121]: I0126 00:35:00.775676 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lnczh" podStartSLOduration=4.080982798 podStartE2EDuration="8.775650825s" podCreationTimestamp="2026-01-26 00:34:52 +0000 UTC" firstStartedPulling="2026-01-26 00:34:53.616045258 +0000 UTC m=+1524.775246383" lastFinishedPulling="2026-01-26 00:34:58.310713285 +0000 UTC m=+1529.469914410" observedRunningTime="2026-01-26 00:35:00.762639577 +0000 UTC m=+1531.921840722" watchObservedRunningTime="2026-01-26 00:35:00.775650825 +0000 UTC m=+1531.934851950" Jan 26 00:35:01 crc kubenswrapper[5121]: I0126 00:35:01.747243 5121 generic.go:358] "Generic (PLEG): container finished" podID="e671ad9d-925b-4143-aeba-285abb548362" containerID="54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff" exitCode=0 Jan 26 00:35:01 crc kubenswrapper[5121]: I0126 00:35:01.749545 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlzks" event={"ID":"e671ad9d-925b-4143-aeba-285abb548362","Type":"ContainerDied","Data":"54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff"} Jan 26 00:35:01 crc kubenswrapper[5121]: I0126 00:35:01.802240 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:35:01 crc kubenswrapper[5121]: I0126 00:35:01.802331 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:35:02 crc kubenswrapper[5121]: I0126 00:35:02.754705 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:35:02 crc kubenswrapper[5121]: I0126 00:35:02.755077 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lnczh"
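
The "Observed pod startup duration" records above carry enough data to re-derive the kubelet's figures: podStartE2EDuration appears to be watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The certified-operators-lnczh record checks out exactly: 8.775650825 - 4.694668027 = 4.080982798. A minimal sketch that reproduces both numbers from the logged timestamps (field names are copied from the log; the subtraction rule is an inference from the numbers, not a statement of kubelet internals; nanosecond digits are truncated to microseconds, which is all Python's datetime holds):

    from datetime import datetime, timezone

    # Timestamps copied from the certified-operators-lnczh record above.
    created   = datetime(2026, 1, 26, 0, 34, 52, 0,      tzinfo=timezone.utc)
    pull_from = datetime(2026, 1, 26, 0, 34, 53, 616045, tzinfo=timezone.utc)
    pull_to   = datetime(2026, 1, 26, 0, 34, 58, 310713, tzinfo=timezone.utc)
    running   = datetime(2026, 1, 26, 0, 35, 0, 775650,  tzinfo=timezone.utc)

    e2e = (running - created).total_seconds()          # ~8.775650s, the podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()  # ~4.080982s, the podStartSLOduration
    print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
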
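The recurring machine-config-daemon liveness failures above are plain HTTP GETs against http://127.0.0.1:8798/health; "connection refused" means nothing was listening on the port at probe time, not that the endpoint returned an unhealthy response. A rough manual equivalent of what the prober does (URL copied from the probe output above; kubelet's HTTP probes count any status from 200 up to but not including 400 as success):

    import urllib.error
    import urllib.request

    URL = "http://127.0.0.1:8798/health"  # endpoint copied from the probe output above

    try:
        with urllib.request.urlopen(URL, timeout=1) as resp:
            print(f"healthy: HTTP {resp.status}")  # 2xx/3xx would pass the probe
    except urllib.error.HTTPError as err:
        print(f"unhealthy: HTTP {err.code}")       # 4xx/5xx would fail it
    except urllib.error.URLError as err:
        print(f"probe failed: {err.reason}")       # "connection refused" lands here
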
Jan 26 00:35:02 crc kubenswrapper[5121]: I0126 00:35:02.760920 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlzks" event={"ID":"e671ad9d-925b-4143-aeba-285abb548362","Type":"ContainerStarted","Data":"44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93"} Jan 26 00:35:02 crc kubenswrapper[5121]: I0126 00:35:02.788833 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tlzks" podStartSLOduration=7.124682337 podStartE2EDuration="7.788806056s" podCreationTimestamp="2026-01-26 00:34:55 +0000 UTC" firstStartedPulling="2026-01-26 00:34:59.685692388 +0000 UTC m=+1530.844893513" lastFinishedPulling="2026-01-26 00:35:00.349816097 +0000 UTC m=+1531.509017232" observedRunningTime="2026-01-26 00:35:02.784209826 +0000 UTC m=+1533.943410951" watchObservedRunningTime="2026-01-26 00:35:02.788806056 +0000 UTC m=+1533.948007181" Jan 26 00:35:02 crc kubenswrapper[5121]: I0126 00:35:02.800709 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:35:06 crc kubenswrapper[5121]: I0126 00:35:06.165454 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:35:06 crc kubenswrapper[5121]: I0126 00:35:06.166055 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:35:07 crc kubenswrapper[5121]: I0126 00:35:07.215989 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tlzks" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="registry-server" probeResult="failure" output=< Jan 26 00:35:07 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Jan 26 00:35:07 crc kubenswrapper[5121]: > Jan 26 00:35:13 crc kubenswrapper[5121]: I0126 00:35:13.812648 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:35:13 crc kubenswrapper[5121]: I0126 00:35:13.876900 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lnczh"] Jan 26 00:35:13 crc kubenswrapper[5121]: I0126 00:35:13.877398 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lnczh" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="registry-server" containerID="cri-o://4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c" gracePeriod=2
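
The Startup probe output above, "timeout: failed to connect service \":50051\" within 1s", matches the error format of the grpc_health_probe tool, which catalog registry pods appear to use to check the registry-server's gRPC port. A rough Python equivalent of that check (assumes the grpcio and grpcio-health-checking packages; the port is copied from the probe output, and the empty service name is the conventional "whole server" health target):

    import grpc
    from grpc_health.v1 import health_pb2, health_pb2_grpc

    # Port taken from the probe output above; service="" asks about the server overall.
    channel = grpc.insecure_channel("127.0.0.1:50051")
    stub = health_pb2_grpc.HealthStub(channel)
    try:
        resp = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=1)
        print(health_pb2.HealthCheckResponse.ServingStatus.Name(resp.status))  # e.g. SERVING
    except grpc.RpcError as err:
        # Until the registry-server finishes loading its catalog, the connection
        # attempt times out or is refused, mirroring the failure logged above.
        print(f"probe failed: {err.code()}")
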
Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.751168 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.778722 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-utilities\") pod \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.779209 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92xft\" (UniqueName: \"kubernetes.io/projected/55f133d1-5dd7-4341-8c4a-d53fa022ea72-kube-api-access-92xft\") pod \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.779465 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-catalog-content\") pod \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\" (UID: \"55f133d1-5dd7-4341-8c4a-d53fa022ea72\") " Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.780007 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-utilities" (OuterVolumeSpecName: "utilities") pod "55f133d1-5dd7-4341-8c4a-d53fa022ea72" (UID: "55f133d1-5dd7-4341-8c4a-d53fa022ea72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.793857 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f133d1-5dd7-4341-8c4a-d53fa022ea72-kube-api-access-92xft" (OuterVolumeSpecName: "kube-api-access-92xft") pod "55f133d1-5dd7-4341-8c4a-d53fa022ea72" (UID: "55f133d1-5dd7-4341-8c4a-d53fa022ea72"). InnerVolumeSpecName "kube-api-access-92xft". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.828440 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55f133d1-5dd7-4341-8c4a-d53fa022ea72" (UID: "55f133d1-5dd7-4341-8c4a-d53fa022ea72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.869938 5121 generic.go:358] "Generic (PLEG): container finished" podID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerID="4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c" exitCode=0 Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.870166 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lnczh" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.870141 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnczh" event={"ID":"55f133d1-5dd7-4341-8c4a-d53fa022ea72","Type":"ContainerDied","Data":"4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c"} Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.870807 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnczh" event={"ID":"55f133d1-5dd7-4341-8c4a-d53fa022ea72","Type":"ContainerDied","Data":"100ef1e83cfb1c4fd496e5b662678b70a4f4a2a9902513fbd4d5cf27510e684b"} Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.870846 5121 scope.go:117] "RemoveContainer" containerID="4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.883246 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.883322 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-92xft\" (UniqueName: \"kubernetes.io/projected/55f133d1-5dd7-4341-8c4a-d53fa022ea72-kube-api-access-92xft\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.883338 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f133d1-5dd7-4341-8c4a-d53fa022ea72-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.909558 5121 scope.go:117] "RemoveContainer" containerID="2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.916429 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lnczh"] Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.922947 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lnczh"] Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.932598 5121 scope.go:117] "RemoveContainer" containerID="0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.953772 5121 scope.go:117] "RemoveContainer" containerID="4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c" Jan 26 00:35:14 crc kubenswrapper[5121]: E0126 00:35:14.954548 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c\": container with ID starting with 4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c not found: ID does not exist" containerID="4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.954614 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c"} err="failed to get container status \"4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c\": rpc error: code = NotFound desc = could not find container \"4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c\": container with ID starting 
with 4dde25d697cbec971a5ce7653d01b39f4a4960f68de61c6e3692e6be4e64c33c not found: ID does not exist" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.954658 5121 scope.go:117] "RemoveContainer" containerID="2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108" Jan 26 00:35:14 crc kubenswrapper[5121]: E0126 00:35:14.955635 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108\": container with ID starting with 2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108 not found: ID does not exist" containerID="2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.955655 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108"} err="failed to get container status \"2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108\": rpc error: code = NotFound desc = could not find container \"2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108\": container with ID starting with 2a1b109f5e711f15434aaba84a3d60b1e041cddfcf3e4402cf74f7584dbed108 not found: ID does not exist" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.955667 5121 scope.go:117] "RemoveContainer" containerID="0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9" Jan 26 00:35:14 crc kubenswrapper[5121]: E0126 00:35:14.956318 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9\": container with ID starting with 0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9 not found: ID does not exist" containerID="0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9" Jan 26 00:35:14 crc kubenswrapper[5121]: I0126 00:35:14.956344 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9"} err="failed to get container status \"0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9\": rpc error: code = NotFound desc = could not find container \"0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9\": container with ID starting with 0f67c8a81b1cda3cc04ceb5dadd3bce027242130cc356f4771e10a9b866f0ba9 not found: ID does not exist" Jan 26 00:35:16 crc kubenswrapper[5121]: I0126 00:35:16.228078 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:35:16 crc kubenswrapper[5121]: I0126 00:35:16.272338 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" path="/var/lib/kubelet/pods/55f133d1-5dd7-4341-8c4a-d53fa022ea72/volumes" Jan 26 00:35:16 crc kubenswrapper[5121]: I0126 00:35:16.281961 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:35:17 crc kubenswrapper[5121]: I0126 00:35:17.452208 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tlzks"]
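
The "ContainerStatus from runtime service failed ... NotFound" errors above look like a benign race rather than a real fault: by the time the kubelet retries RemoveContainer for those IDs, CRI-O has already dropped the container records along with the pod, so the status lookup answers NotFound and the kubelet logs it and moves on. The underlying pattern is an idempotent delete, where "already gone" counts as the desired end state. A minimal sketch of that pattern (class and function names here are invented for illustration, not kubelet code):

    class NotFound(Exception):
        """Runtime has no record of the container (cf. the rpc NotFound above)."""

    class FakeRuntime:
        def __init__(self, containers):
            self.containers = set(containers)

        def remove(self, container_id):
            if container_id not in self.containers:
                raise NotFound(container_id)
            self.containers.discard(container_id)

    def remove_container(runtime, container_id):
        """Delete a container, treating 'already removed' as success."""
        try:
            runtime.remove(container_id)
            print(f"removed {container_id}")
        except NotFound:
            # The goal (container absent) already holds; log and continue,
            # which is the stance the kubelet takes in the entries above.
            print(f"{container_id} already gone; nothing to do")

    runtime = FakeRuntime({"aaa111"})
    remove_container(runtime, "aaa111")  # removed
    remove_container(runtime, "aaa111")  # already gone; nothing to do
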
Jan 26 00:35:17 crc kubenswrapper[5121]: I0126 00:35:17.906582 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tlzks" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="registry-server" containerID="cri-o://44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93" gracePeriod=2 Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.304854 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.449168 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-utilities\") pod \"e671ad9d-925b-4143-aeba-285abb548362\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.449266 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-catalog-content\") pod \"e671ad9d-925b-4143-aeba-285abb548362\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.449557 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnxkw\" (UniqueName: \"kubernetes.io/projected/e671ad9d-925b-4143-aeba-285abb548362-kube-api-access-hnxkw\") pod \"e671ad9d-925b-4143-aeba-285abb548362\" (UID: \"e671ad9d-925b-4143-aeba-285abb548362\") " Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.452551 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-utilities" (OuterVolumeSpecName: "utilities") pod "e671ad9d-925b-4143-aeba-285abb548362" (UID: "e671ad9d-925b-4143-aeba-285abb548362"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.456549 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e671ad9d-925b-4143-aeba-285abb548362-kube-api-access-hnxkw" (OuterVolumeSpecName: "kube-api-access-hnxkw") pod "e671ad9d-925b-4143-aeba-285abb548362" (UID: "e671ad9d-925b-4143-aeba-285abb548362"). InnerVolumeSpecName "kube-api-access-hnxkw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.551163 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnxkw\" (UniqueName: \"kubernetes.io/projected/e671ad9d-925b-4143-aeba-285abb548362-kube-api-access-hnxkw\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.551198 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.565680 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e671ad9d-925b-4143-aeba-285abb548362" (UID: "e671ad9d-925b-4143-aeba-285abb548362"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.653240 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e671ad9d-925b-4143-aeba-285abb548362-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.916801 5121 generic.go:358] "Generic (PLEG): container finished" podID="e671ad9d-925b-4143-aeba-285abb548362" containerID="44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93" exitCode=0 Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.916948 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlzks" event={"ID":"e671ad9d-925b-4143-aeba-285abb548362","Type":"ContainerDied","Data":"44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93"} Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.917001 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tlzks" event={"ID":"e671ad9d-925b-4143-aeba-285abb548362","Type":"ContainerDied","Data":"2224dc69a8aff61220378aed4bac0685a409dd2028c8b4e0c8b6443c11e3a1fd"} Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.917030 5121 scope.go:117] "RemoveContainer" containerID="44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.917269 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tlzks" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.941631 5121 scope.go:117] "RemoveContainer" containerID="54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.955961 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tlzks"] Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.961349 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tlzks"] Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.963860 5121 scope.go:117] "RemoveContainer" containerID="86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.988488 5121 scope.go:117] "RemoveContainer" containerID="44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93" Jan 26 00:35:18 crc kubenswrapper[5121]: E0126 00:35:18.989027 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93\": container with ID starting with 44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93 not found: ID does not exist" containerID="44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.989078 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93"} err="failed to get container status \"44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93\": rpc error: code = NotFound desc = could not find container \"44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93\": container with ID starting with 44c684874369c1ac20bed3903466ffe734363c8d72e0243f4410e5230d759d93 not found: ID does not exist" Jan 26 00:35:18 crc 
kubenswrapper[5121]: I0126 00:35:18.989118 5121 scope.go:117] "RemoveContainer" containerID="54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff" Jan 26 00:35:18 crc kubenswrapper[5121]: E0126 00:35:18.989331 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff\": container with ID starting with 54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff not found: ID does not exist" containerID="54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.989362 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff"} err="failed to get container status \"54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff\": rpc error: code = NotFound desc = could not find container \"54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff\": container with ID starting with 54d199be873bcea992c803f035a8124effae82799ed01af9883d83bc78d234ff not found: ID does not exist" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.989383 5121 scope.go:117] "RemoveContainer" containerID="86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc" Jan 26 00:35:18 crc kubenswrapper[5121]: E0126 00:35:18.989928 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc\": container with ID starting with 86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc not found: ID does not exist" containerID="86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc" Jan 26 00:35:18 crc kubenswrapper[5121]: I0126 00:35:18.989995 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc"} err="failed to get container status \"86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc\": rpc error: code = NotFound desc = could not find container \"86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc\": container with ID starting with 86a9db3fd22ec98a7573141f8ceb81d47bf28d016d290026dd28cfdd5ddc3ddc not found: ID does not exist" Jan 26 00:35:20 crc kubenswrapper[5121]: I0126 00:35:20.264219 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e671ad9d-925b-4143-aeba-285abb548362" path="/var/lib/kubelet/pods/e671ad9d-925b-4143-aeba-285abb548362/volumes" Jan 26 00:35:31 crc kubenswrapper[5121]: I0126 00:35:31.802035 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:35:31 crc kubenswrapper[5121]: I0126 00:35:31.802677 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:35:31 crc kubenswrapper[5121]: I0126 00:35:31.802791 5121 kubelet.go:2658] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:35:31 crc kubenswrapper[5121]: I0126 00:35:31.803787 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c79cfd07312245deb96f45bff9db59875473423a89e6a0595e5a37fdb4ea55e"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:35:31 crc kubenswrapper[5121]: I0126 00:35:31.803860 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" containerID="cri-o://4c79cfd07312245deb96f45bff9db59875473423a89e6a0595e5a37fdb4ea55e" gracePeriod=600 Jan 26 00:35:32 crc kubenswrapper[5121]: I0126 00:35:32.021199 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="4c79cfd07312245deb96f45bff9db59875473423a89e6a0595e5a37fdb4ea55e" exitCode=0 Jan 26 00:35:32 crc kubenswrapper[5121]: I0126 00:35:32.021313 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"4c79cfd07312245deb96f45bff9db59875473423a89e6a0595e5a37fdb4ea55e"} Jan 26 00:35:32 crc kubenswrapper[5121]: I0126 00:35:32.021684 5121 scope.go:117] "RemoveContainer" containerID="b833963d85ba51f54d5d46d8a4bcffc5186b5cf7198ce48a03fd6f13859dcd53" Jan 26 00:35:33 crc kubenswrapper[5121]: I0126 00:35:33.031000 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerStarted","Data":"3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223"} Jan 26 00:35:44 crc kubenswrapper[5121]: I0126 00:35:44.046477 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-66dzp_67f7f3b0-5f2e-4242-be97-3e765a5ea9e0/control-plane-machine-set-operator/0.log" Jan 26 00:35:44 crc kubenswrapper[5121]: I0126 00:35:44.209674 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-4whj5_dfeddd81-f3cd-485c-8637-053e6d8cec00/kube-rbac-proxy/0.log" Jan 26 00:35:44 crc kubenswrapper[5121]: I0126 00:35:44.210299 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-4whj5_dfeddd81-f3cd-485c-8637-053e6d8cec00/machine-api-operator/0.log" Jan 26 00:35:57 crc kubenswrapper[5121]: I0126 00:35:57.000397 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-fl77x_ce8c85de-8ddd-4eb6-8dbd-3e42dc4031c4/cert-manager-controller/0.log" Jan 26 00:35:57 crc kubenswrapper[5121]: I0126 00:35:57.126964 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-f7672_03fe3312-4dbf-42da-bf01-4f541b24d3df/cert-manager-cainjector/0.log" Jan 26 00:35:57 crc kubenswrapper[5121]: I0126 00:35:57.193301 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-cddv6_68b12486-8042-4570-bd05-6bb6664c0a2c/cert-manager-webhook/0.log" Jan 26 
00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.145451 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489796-kwtnq"] Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146862 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146890 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146911 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146920 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146935 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146945 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="extract-utilities" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146966 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="extract-content" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146974 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="extract-content" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.146993 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="extract-content" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.147001 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="extract-content" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.147027 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.147035 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.147211 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="55f133d1-5dd7-4341-8c4a-d53fa022ea72" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.147227 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e671ad9d-925b-4143-aeba-285abb548362" containerName="registry-server" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.248907 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489796-kwtnq"] Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.249126 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-kwtnq" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.252061 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.252238 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.252504 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.324242 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcl88\" (UniqueName: \"kubernetes.io/projected/6075ee58-8e3d-4d56-8342-ec9c351f2a97-kube-api-access-dcl88\") pod \"auto-csr-approver-29489796-kwtnq\" (UID: \"6075ee58-8e3d-4d56-8342-ec9c351f2a97\") " pod="openshift-infra/auto-csr-approver-29489796-kwtnq" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.426934 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dcl88\" (UniqueName: \"kubernetes.io/projected/6075ee58-8e3d-4d56-8342-ec9c351f2a97-kube-api-access-dcl88\") pod \"auto-csr-approver-29489796-kwtnq\" (UID: \"6075ee58-8e3d-4d56-8342-ec9c351f2a97\") " pod="openshift-infra/auto-csr-approver-29489796-kwtnq" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.451253 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcl88\" (UniqueName: \"kubernetes.io/projected/6075ee58-8e3d-4d56-8342-ec9c351f2a97-kube-api-access-dcl88\") pod \"auto-csr-approver-29489796-kwtnq\" (UID: \"6075ee58-8e3d-4d56-8342-ec9c351f2a97\") " pod="openshift-infra/auto-csr-approver-29489796-kwtnq" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.572883 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-kwtnq" Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.854382 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489796-kwtnq"] Jan 26 00:36:00 crc kubenswrapper[5121]: I0126 00:36:00.877817 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:36:01 crc kubenswrapper[5121]: I0126 00:36:01.276719 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-kwtnq" event={"ID":"6075ee58-8e3d-4d56-8342-ec9c351f2a97","Type":"ContainerStarted","Data":"8a03aaeffd79e510267319b807f3021596be9bce8e9a2a1c0f9034efa868add5"} Jan 26 00:36:03 crc kubenswrapper[5121]: I0126 00:36:03.297023 5121 generic.go:358] "Generic (PLEG): container finished" podID="6075ee58-8e3d-4d56-8342-ec9c351f2a97" containerID="47f396c4ac8564303251ba67275dc2e3aa72812366d3eebcdb2716c07f6aa33a" exitCode=0 Jan 26 00:36:03 crc kubenswrapper[5121]: I0126 00:36:03.297121 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-kwtnq" event={"ID":"6075ee58-8e3d-4d56-8342-ec9c351f2a97","Type":"ContainerDied","Data":"47f396c4ac8564303251ba67275dc2e3aa72812366d3eebcdb2716c07f6aa33a"} Jan 26 00:36:04 crc kubenswrapper[5121]: I0126 00:36:04.736573 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-kwtnq" Jan 26 00:36:04 crc kubenswrapper[5121]: I0126 00:36:04.823856 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcl88\" (UniqueName: \"kubernetes.io/projected/6075ee58-8e3d-4d56-8342-ec9c351f2a97-kube-api-access-dcl88\") pod \"6075ee58-8e3d-4d56-8342-ec9c351f2a97\" (UID: \"6075ee58-8e3d-4d56-8342-ec9c351f2a97\") " Jan 26 00:36:04 crc kubenswrapper[5121]: I0126 00:36:04.835960 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6075ee58-8e3d-4d56-8342-ec9c351f2a97-kube-api-access-dcl88" (OuterVolumeSpecName: "kube-api-access-dcl88") pod "6075ee58-8e3d-4d56-8342-ec9c351f2a97" (UID: "6075ee58-8e3d-4d56-8342-ec9c351f2a97"). InnerVolumeSpecName "kube-api-access-dcl88". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:36:04 crc kubenswrapper[5121]: I0126 00:36:04.926822 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dcl88\" (UniqueName: \"kubernetes.io/projected/6075ee58-8e3d-4d56-8342-ec9c351f2a97-kube-api-access-dcl88\") on node \"crc\" DevicePath \"\"" Jan 26 00:36:05 crc kubenswrapper[5121]: I0126 00:36:05.316736 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489796-kwtnq" Jan 26 00:36:05 crc kubenswrapper[5121]: I0126 00:36:05.317052 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489796-kwtnq" event={"ID":"6075ee58-8e3d-4d56-8342-ec9c351f2a97","Type":"ContainerDied","Data":"8a03aaeffd79e510267319b807f3021596be9bce8e9a2a1c0f9034efa868add5"} Jan 26 00:36:05 crc kubenswrapper[5121]: I0126 00:36:05.317120 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a03aaeffd79e510267319b807f3021596be9bce8e9a2a1c0f9034efa868add5" Jan 26 00:36:05 crc kubenswrapper[5121]: I0126 00:36:05.836008 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-sgqt2"] Jan 26 00:36:05 crc kubenswrapper[5121]: I0126 00:36:05.840935 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489790-sgqt2"] Jan 26 00:36:06 crc kubenswrapper[5121]: I0126 00:36:06.267259 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e9d6b8-6409-4cdd-8149-ac57bb7a0db5" path="/var/lib/kubelet/pods/53e9d6b8-6409-4cdd-8149-ac57bb7a0db5/volumes" Jan 26 00:36:13 crc kubenswrapper[5121]: I0126 00:36:13.008690 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-bszz6_60174fba-616c-468e-987d-000b10781865/prometheus-operator/0.log" Jan 26 00:36:13 crc kubenswrapper[5121]: I0126 00:36:13.137514 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p_d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:13 crc kubenswrapper[5121]: I0126 00:36:13.204276 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m_1cc26fef-f6c1-40f1-a725-2d56affc8312/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:13 crc kubenswrapper[5121]: I0126 00:36:13.355500 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-6l7zp_2a6d80ea-c93d-4421-9b56-386c475b7a5d/operator/0.log" Jan 26 00:36:13 crc kubenswrapper[5121]: I0126 00:36:13.415283 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-l57mr_20c492e3-8db9-46c1-8ccf-83a6b000115e/perses-operator/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.244824 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99_43baf954-9ecd-4111-869d-c5e885c96085/util/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.473588 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99_43baf954-9ecd-4111-869d-c5e885c96085/util/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.498720 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99_43baf954-9ecd-4111-869d-c5e885c96085/pull/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.545073 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99_43baf954-9ecd-4111-869d-c5e885c96085/pull/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.713742 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99_43baf954-9ecd-4111-869d-c5e885c96085/util/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.759874 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99_43baf954-9ecd-4111-869d-c5e885c96085/extract/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.763190 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931anpf99_43baf954-9ecd-4111-869d-c5e885c96085/pull/0.log" Jan 26 00:36:29 crc kubenswrapper[5121]: I0126 00:36:29.942969 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4_2b858a05-1513-4bd9-be86-ddabf9c23169/util/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.119726 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4_2b858a05-1513-4bd9-be86-ddabf9c23169/util/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.131040 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4_2b858a05-1513-4bd9-be86-ddabf9c23169/pull/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.136796 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4_2b858a05-1513-4bd9-be86-ddabf9c23169/pull/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.366092 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4_2b858a05-1513-4bd9-be86-ddabf9c23169/util/0.log" Jan 26 00:36:30 crc 
kubenswrapper[5121]: I0126 00:36:30.372531 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4_2b858a05-1513-4bd9-be86-ddabf9c23169/extract/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.396330 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fx6lb4_2b858a05-1513-4bd9-be86-ddabf9c23169/pull/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.582555 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd_b3275426-f8e5-4f1d-9340-1d579ee79d7a/util/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.746622 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd_b3275426-f8e5-4f1d-9340-1d579ee79d7a/pull/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.781300 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd_b3275426-f8e5-4f1d-9340-1d579ee79d7a/util/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.817750 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd_b3275426-f8e5-4f1d-9340-1d579ee79d7a/pull/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.929351 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd_b3275426-f8e5-4f1d-9340-1d579ee79d7a/util/0.log" Jan 26 00:36:30 crc kubenswrapper[5121]: I0126 00:36:30.952632 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd_b3275426-f8e5-4f1d-9340-1d579ee79d7a/pull/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.016798 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5et6mwd_b3275426-f8e5-4f1d-9340-1d579ee79d7a/extract/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.141929 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd_f690edc2-1dd5-4fce-81a2-4355eda9213e/util/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.322511 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd_f690edc2-1dd5-4fce-81a2-4355eda9213e/pull/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.343879 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd_f690edc2-1dd5-4fce-81a2-4355eda9213e/util/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.356030 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd_f690edc2-1dd5-4fce-81a2-4355eda9213e/pull/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.547750 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd_f690edc2-1dd5-4fce-81a2-4355eda9213e/util/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.552462 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd_f690edc2-1dd5-4fce-81a2-4355eda9213e/pull/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.611100 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wg8hd_f690edc2-1dd5-4fce-81a2-4355eda9213e/extract/0.log" Jan 26 00:36:31 crc kubenswrapper[5121]: I0126 00:36:31.940708 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kq592_35ee43d8-f119-418d-8f93-682a4ac716f4/extract-utilities/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.108615 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kq592_35ee43d8-f119-418d-8f93-682a4ac716f4/extract-content/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.132133 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kq592_35ee43d8-f119-418d-8f93-682a4ac716f4/extract-utilities/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.137145 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kq592_35ee43d8-f119-418d-8f93-682a4ac716f4/extract-content/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.381361 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kq592_35ee43d8-f119-418d-8f93-682a4ac716f4/extract-utilities/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.386077 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kq592_35ee43d8-f119-418d-8f93-682a4ac716f4/extract-content/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.446393 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2txw7_e2dce66a-3bc6-4888-b054-5d06e1c1bef0/extract-utilities/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.571256 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kq592_35ee43d8-f119-418d-8f93-682a4ac716f4/registry-server/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.647999 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2txw7_e2dce66a-3bc6-4888-b054-5d06e1c1bef0/extract-utilities/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.664537 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2txw7_e2dce66a-3bc6-4888-b054-5d06e1c1bef0/extract-content/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.938982 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2txw7_e2dce66a-3bc6-4888-b054-5d06e1c1bef0/extract-content/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: I0126 00:36:32.940291 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2txw7_e2dce66a-3bc6-4888-b054-5d06e1c1bef0/extract-content/0.log" Jan 26 00:36:32 crc kubenswrapper[5121]: 
I0126 00:36:32.950500 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2txw7_e2dce66a-3bc6-4888-b054-5d06e1c1bef0/extract-utilities/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.163630 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2txw7_e2dce66a-3bc6-4888-b054-5d06e1c1bef0/registry-server/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.235644 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-th95t_56a05c39-385c-44b8-be51-7d5c3df9540d/marketplace-operator/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.281354 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7zv49_eb37ecc8-c468-4f1d-88b6-3b1fa517ed70/extract-utilities/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.440660 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7zv49_eb37ecc8-c468-4f1d-88b6-3b1fa517ed70/extract-utilities/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.491131 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7zv49_eb37ecc8-c468-4f1d-88b6-3b1fa517ed70/extract-content/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.494474 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7zv49_eb37ecc8-c468-4f1d-88b6-3b1fa517ed70/extract-content/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.681518 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7zv49_eb37ecc8-c468-4f1d-88b6-3b1fa517ed70/extract-utilities/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.685642 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7zv49_eb37ecc8-c468-4f1d-88b6-3b1fa517ed70/extract-content/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.832052 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7zv49_eb37ecc8-c468-4f1d-88b6-3b1fa517ed70/registry-server/0.log" Jan 26 00:36:33 crc kubenswrapper[5121]: I0126 00:36:33.906404 5121 scope.go:117] "RemoveContainer" containerID="1f9e11b651d1343721b5cc9e424e9c48adae4fc5c890ec0d8fff4bd679f0edbf" Jan 26 00:36:46 crc kubenswrapper[5121]: I0126 00:36:46.613528 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-bszz6_60174fba-616c-468e-987d-000b10781865/prometheus-operator/0.log" Jan 26 00:36:46 crc kubenswrapper[5121]: I0126 00:36:46.640903 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-686d5ffd76-n4g2m_1cc26fef-f6c1-40f1-a725-2d56affc8312/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:46 crc kubenswrapper[5121]: I0126 00:36:46.652014 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-686d5ffd76-dzh2p_d4ee8ff0-2fc3-438b-a3ba-3b454dafbc8a/prometheus-operator-admission-webhook/0.log" Jan 26 00:36:46 crc kubenswrapper[5121]: I0126 00:36:46.784516 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-6l7zp_2a6d80ea-c93d-4421-9b56-386c475b7a5d/operator/0.log" Jan 26 00:36:46 crc kubenswrapper[5121]: I0126 00:36:46.847631 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-l57mr_20c492e3-8db9-46c1-8ccf-83a6b000115e/perses-operator/0.log" Jan 26 00:37:35 crc kubenswrapper[5121]: I0126 00:37:35.135354 5121 generic.go:358] "Generic (PLEG): container finished" podID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerID="6c659b14738e6c553ed2ad3f521b3db2666870733af672318c32ea122db890f1" exitCode=0 Jan 26 00:37:35 crc kubenswrapper[5121]: I0126 00:37:35.135492 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" event={"ID":"c959418d-f3dc-4e83-93ba-fe643c9c9e79","Type":"ContainerDied","Data":"6c659b14738e6c553ed2ad3f521b3db2666870733af672318c32ea122db890f1"} Jan 26 00:37:35 crc kubenswrapper[5121]: I0126 00:37:35.137469 5121 scope.go:117] "RemoveContainer" containerID="6c659b14738e6c553ed2ad3f521b3db2666870733af672318c32ea122db890f1" Jan 26 00:37:35 crc kubenswrapper[5121]: I0126 00:37:35.753797 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qnhhj_must-gather-d2bp6_c959418d-f3dc-4e83-93ba-fe643c9c9e79/gather/0.log" Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.795079 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6lz5q"] Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.796872 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6075ee58-8e3d-4d56-8342-ec9c351f2a97" containerName="oc" Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.796889 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6075ee58-8e3d-4d56-8342-ec9c351f2a97" containerName="oc" Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.797017 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="6075ee58-8e3d-4d56-8342-ec9c351f2a97" containerName="oc" Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.808536 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.813948 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6lz5q"] Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.924873 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxnx\" (UniqueName: \"kubernetes.io/projected/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-kube-api-access-4zxnx\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.925004 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-utilities\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:41 crc kubenswrapper[5121]: I0126 00:37:41.925174 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-catalog-content\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.026941 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-utilities\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.027082 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-catalog-content\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.027150 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxnx\" (UniqueName: \"kubernetes.io/projected/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-kube-api-access-4zxnx\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.027720 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-catalog-content\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.028114 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-utilities\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.033557 5121 kubelet.go:2553] "SyncLoop 
DELETE" source="api" pods=["openshift-must-gather-qnhhj/must-gather-d2bp6"] Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.034312 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerName="copy" containerID="cri-o://85400845f60c667265cbde43ce739d30d443c4c3881bb32fcb0b92e1f57c6861" gracePeriod=2 Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.036291 5121 status_manager.go:895] "Failed to get status for pod" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" err="pods \"must-gather-d2bp6\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-qnhhj\": no relationship found between node 'crc' and this object" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.059475 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxnx\" (UniqueName: \"kubernetes.io/projected/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-kube-api-access-4zxnx\") pod \"community-operators-6lz5q\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.072302 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qnhhj/must-gather-d2bp6"] Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.137835 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.195554 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qnhhj_must-gather-d2bp6_c959418d-f3dc-4e83-93ba-fe643c9c9e79/copy/0.log" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.195966 5121 generic.go:358] "Generic (PLEG): container finished" podID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerID="85400845f60c667265cbde43ce739d30d443c4c3881bb32fcb0b92e1f57c6861" exitCode=143 Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.626237 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qnhhj_must-gather-d2bp6_c959418d-f3dc-4e83-93ba-fe643c9c9e79/copy/0.log" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.626699 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.688796 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm8mn\" (UniqueName: \"kubernetes.io/projected/c959418d-f3dc-4e83-93ba-fe643c9c9e79-kube-api-access-hm8mn\") pod \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.689250 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c959418d-f3dc-4e83-93ba-fe643c9c9e79-must-gather-output\") pod \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\" (UID: \"c959418d-f3dc-4e83-93ba-fe643c9c9e79\") " Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.700163 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c959418d-f3dc-4e83-93ba-fe643c9c9e79-kube-api-access-hm8mn" (OuterVolumeSpecName: "kube-api-access-hm8mn") pod "c959418d-f3dc-4e83-93ba-fe643c9c9e79" (UID: "c959418d-f3dc-4e83-93ba-fe643c9c9e79"). InnerVolumeSpecName "kube-api-access-hm8mn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.723518 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6lz5q"] Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.778236 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c959418d-f3dc-4e83-93ba-fe643c9c9e79-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c959418d-f3dc-4e83-93ba-fe643c9c9e79" (UID: "c959418d-f3dc-4e83-93ba-fe643c9c9e79"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.792376 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm8mn\" (UniqueName: \"kubernetes.io/projected/c959418d-f3dc-4e83-93ba-fe643c9c9e79-kube-api-access-hm8mn\") on node \"crc\" DevicePath \"\"" Jan 26 00:37:42 crc kubenswrapper[5121]: I0126 00:37:42.792428 5121 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c959418d-f3dc-4e83-93ba-fe643c9c9e79-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 00:37:43 crc kubenswrapper[5121]: I0126 00:37:43.208483 5121 generic.go:358] "Generic (PLEG): container finished" podID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerID="223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653" exitCode=0 Jan 26 00:37:43 crc kubenswrapper[5121]: I0126 00:37:43.208615 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lz5q" event={"ID":"1f2bf2a1-8ea9-4d70-8405-3b781faeab37","Type":"ContainerDied","Data":"223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653"} Jan 26 00:37:43 crc kubenswrapper[5121]: I0126 00:37:43.208691 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lz5q" event={"ID":"1f2bf2a1-8ea9-4d70-8405-3b781faeab37","Type":"ContainerStarted","Data":"4e6b755a02acdcefb4b722ccaf0214ecea1e490ed5584ddf98b8d2afa7978689"} Jan 26 00:37:43 crc kubenswrapper[5121]: I0126 00:37:43.213546 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qnhhj_must-gather-d2bp6_c959418d-f3dc-4e83-93ba-fe643c9c9e79/copy/0.log" Jan 26 00:37:43 crc kubenswrapper[5121]: I0126 00:37:43.214450 5121 scope.go:117] "RemoveContainer" containerID="85400845f60c667265cbde43ce739d30d443c4c3881bb32fcb0b92e1f57c6861" Jan 26 00:37:43 crc kubenswrapper[5121]: I0126 00:37:43.214481 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qnhhj/must-gather-d2bp6" Jan 26 00:37:43 crc kubenswrapper[5121]: I0126 00:37:43.241410 5121 scope.go:117] "RemoveContainer" containerID="6c659b14738e6c553ed2ad3f521b3db2666870733af672318c32ea122db890f1" Jan 26 00:37:44 crc kubenswrapper[5121]: I0126 00:37:44.320404 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" path="/var/lib/kubelet/pods/c959418d-f3dc-4e83-93ba-fe643c9c9e79/volumes" Jan 26 00:37:45 crc kubenswrapper[5121]: I0126 00:37:45.339235 5121 generic.go:358] "Generic (PLEG): container finished" podID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerID="16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738" exitCode=0 Jan 26 00:37:45 crc kubenswrapper[5121]: I0126 00:37:45.339330 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lz5q" event={"ID":"1f2bf2a1-8ea9-4d70-8405-3b781faeab37","Type":"ContainerDied","Data":"16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738"} Jan 26 00:37:46 crc kubenswrapper[5121]: I0126 00:37:46.354933 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lz5q" event={"ID":"1f2bf2a1-8ea9-4d70-8405-3b781faeab37","Type":"ContainerStarted","Data":"f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef"} Jan 26 00:37:46 crc kubenswrapper[5121]: I0126 00:37:46.385659 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6lz5q" podStartSLOduration=4.180660951 podStartE2EDuration="5.38563292s" podCreationTimestamp="2026-01-26 00:37:41 +0000 UTC" firstStartedPulling="2026-01-26 00:37:43.209702125 +0000 UTC m=+1694.368903250" lastFinishedPulling="2026-01-26 00:37:44.414674084 +0000 UTC m=+1695.573875219" observedRunningTime="2026-01-26 00:37:46.379542258 +0000 UTC m=+1697.538743393" watchObservedRunningTime="2026-01-26 00:37:46.38563292 +0000 UTC m=+1697.544834055" Jan 26 00:37:52 crc kubenswrapper[5121]: I0126 00:37:52.138810 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:52 crc kubenswrapper[5121]: I0126 00:37:52.139797 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:52 crc kubenswrapper[5121]: I0126 00:37:52.187571 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:52 crc kubenswrapper[5121]: I0126 00:37:52.622243 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:52 crc kubenswrapper[5121]: I0126 00:37:52.683072 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6lz5q"] Jan 26 00:37:54 crc kubenswrapper[5121]: I0126 00:37:54.669688 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6lz5q" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="registry-server" containerID="cri-o://f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef" gracePeriod=2 Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.498633 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.525389 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-utilities\") pod \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.525471 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxnx\" (UniqueName: \"kubernetes.io/projected/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-kube-api-access-4zxnx\") pod \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.525511 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-catalog-content\") pod \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\" (UID: \"1f2bf2a1-8ea9-4d70-8405-3b781faeab37\") " Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.527911 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-utilities" (OuterVolumeSpecName: "utilities") pod "1f2bf2a1-8ea9-4d70-8405-3b781faeab37" (UID: "1f2bf2a1-8ea9-4d70-8405-3b781faeab37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.540380 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-kube-api-access-4zxnx" (OuterVolumeSpecName: "kube-api-access-4zxnx") pod "1f2bf2a1-8ea9-4d70-8405-3b781faeab37" (UID: "1f2bf2a1-8ea9-4d70-8405-3b781faeab37"). InnerVolumeSpecName "kube-api-access-4zxnx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.597139 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f2bf2a1-8ea9-4d70-8405-3b781faeab37" (UID: "1f2bf2a1-8ea9-4d70-8405-3b781faeab37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.627619 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.627692 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4zxnx\" (UniqueName: \"kubernetes.io/projected/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-kube-api-access-4zxnx\") on node \"crc\" DevicePath \"\"" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.627705 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2bf2a1-8ea9-4d70-8405-3b781faeab37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.681654 5121 generic.go:358] "Generic (PLEG): container finished" podID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerID="f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef" exitCode=0 Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.681817 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lz5q" event={"ID":"1f2bf2a1-8ea9-4d70-8405-3b781faeab37","Type":"ContainerDied","Data":"f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef"} Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.681895 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6lz5q" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.681926 5121 scope.go:117] "RemoveContainer" containerID="f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.681907 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lz5q" event={"ID":"1f2bf2a1-8ea9-4d70-8405-3b781faeab37","Type":"ContainerDied","Data":"4e6b755a02acdcefb4b722ccaf0214ecea1e490ed5584ddf98b8d2afa7978689"} Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.711928 5121 scope.go:117] "RemoveContainer" containerID="16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.730622 5121 scope.go:117] "RemoveContainer" containerID="223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.765184 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6lz5q"] Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.772238 5121 scope.go:117] "RemoveContainer" containerID="f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef" Jan 26 00:37:55 crc kubenswrapper[5121]: E0126 00:37:55.772970 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef\": container with ID starting with f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef not found: ID does not exist" containerID="f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.773081 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef"} 
err="failed to get container status \"f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef\": rpc error: code = NotFound desc = could not find container \"f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef\": container with ID starting with f0adf4bb52cdbab852a2bfaff74d6846084d3250f920af635f89a2e0ead1cdef not found: ID does not exist" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.773173 5121 scope.go:117] "RemoveContainer" containerID="16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738" Jan 26 00:37:55 crc kubenswrapper[5121]: E0126 00:37:55.773546 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738\": container with ID starting with 16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738 not found: ID does not exist" containerID="16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.773644 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738"} err="failed to get container status \"16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738\": rpc error: code = NotFound desc = could not find container \"16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738\": container with ID starting with 16aa813b483d6d37bff948b065268e76a3e9c3f7dd356fce081fe7fe2aab7738 not found: ID does not exist" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.773722 5121 scope.go:117] "RemoveContainer" containerID="223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653" Jan 26 00:37:55 crc kubenswrapper[5121]: E0126 00:37:55.773989 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653\": container with ID starting with 223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653 not found: ID does not exist" containerID="223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.774110 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653"} err="failed to get container status \"223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653\": rpc error: code = NotFound desc = could not find container \"223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653\": container with ID starting with 223c7683e8293359ea54ced35835dcc0f2b7375f6d599dc83892ee74ac1e2653 not found: ID does not exist" Jan 26 00:37:55 crc kubenswrapper[5121]: I0126 00:37:55.776263 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6lz5q"] Jan 26 00:37:56 crc kubenswrapper[5121]: I0126 00:37:56.265519 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" path="/var/lib/kubelet/pods/1f2bf2a1-8ea9-4d70-8405-3b781faeab37/volumes" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.148390 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489798-9wprp"] Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150218 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerName="gather" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150241 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerName="gather" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150281 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="extract-utilities" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150291 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="extract-utilities" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150301 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="extract-content" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150311 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="extract-content" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150324 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerName="copy" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150331 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerName="copy" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150347 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="registry-server" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150354 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="registry-server" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150494 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f2bf2a1-8ea9-4d70-8405-3b781faeab37" containerName="registry-server" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150510 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerName="gather" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.150520 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c959418d-f3dc-4e83-93ba-fe643c9c9e79" containerName="copy" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.180193 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489798-9wprp"] Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.180482 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-9wprp" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.184430 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.184475 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.185103 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.192809 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvhz\" (UniqueName: \"kubernetes.io/projected/018a7977-e573-4985-bd3e-f7d809826b59-kube-api-access-ckvhz\") pod \"auto-csr-approver-29489798-9wprp\" (UID: \"018a7977-e573-4985-bd3e-f7d809826b59\") " pod="openshift-infra/auto-csr-approver-29489798-9wprp" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.297304 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckvhz\" (UniqueName: \"kubernetes.io/projected/018a7977-e573-4985-bd3e-f7d809826b59-kube-api-access-ckvhz\") pod \"auto-csr-approver-29489798-9wprp\" (UID: \"018a7977-e573-4985-bd3e-f7d809826b59\") " pod="openshift-infra/auto-csr-approver-29489798-9wprp" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.324050 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckvhz\" (UniqueName: \"kubernetes.io/projected/018a7977-e573-4985-bd3e-f7d809826b59-kube-api-access-ckvhz\") pod \"auto-csr-approver-29489798-9wprp\" (UID: \"018a7977-e573-4985-bd3e-f7d809826b59\") " pod="openshift-infra/auto-csr-approver-29489798-9wprp" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.500650 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-9wprp" Jan 26 00:38:00 crc kubenswrapper[5121]: I0126 00:38:00.726677 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489798-9wprp"] Jan 26 00:38:01 crc kubenswrapper[5121]: I0126 00:38:01.742100 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489798-9wprp" event={"ID":"018a7977-e573-4985-bd3e-f7d809826b59","Type":"ContainerStarted","Data":"fb498e809de3ac03c3e6eb9d35884c9b4cb43d5012c7cbcb6545a49c8cebd597"} Jan 26 00:38:01 crc kubenswrapper[5121]: I0126 00:38:01.802222 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:38:01 crc kubenswrapper[5121]: I0126 00:38:01.802317 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:38:02 crc kubenswrapper[5121]: I0126 00:38:02.755481 5121 generic.go:358] "Generic (PLEG): container finished" podID="018a7977-e573-4985-bd3e-f7d809826b59" containerID="4326786710c3233d3b0dda66a055a7812fa87fbf7b25791df1977b74912dc3a3" exitCode=0 Jan 26 00:38:02 crc kubenswrapper[5121]: I0126 00:38:02.755613 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489798-9wprp" event={"ID":"018a7977-e573-4985-bd3e-f7d809826b59","Type":"ContainerDied","Data":"4326786710c3233d3b0dda66a055a7812fa87fbf7b25791df1977b74912dc3a3"} Jan 26 00:38:04 crc kubenswrapper[5121]: I0126 00:38:04.412009 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-9wprp" Jan 26 00:38:04 crc kubenswrapper[5121]: I0126 00:38:04.490598 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckvhz\" (UniqueName: \"kubernetes.io/projected/018a7977-e573-4985-bd3e-f7d809826b59-kube-api-access-ckvhz\") pod \"018a7977-e573-4985-bd3e-f7d809826b59\" (UID: \"018a7977-e573-4985-bd3e-f7d809826b59\") " Jan 26 00:38:04 crc kubenswrapper[5121]: I0126 00:38:04.500036 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018a7977-e573-4985-bd3e-f7d809826b59-kube-api-access-ckvhz" (OuterVolumeSpecName: "kube-api-access-ckvhz") pod "018a7977-e573-4985-bd3e-f7d809826b59" (UID: "018a7977-e573-4985-bd3e-f7d809826b59"). InnerVolumeSpecName "kube-api-access-ckvhz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:38:04 crc kubenswrapper[5121]: I0126 00:38:04.593202 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ckvhz\" (UniqueName: \"kubernetes.io/projected/018a7977-e573-4985-bd3e-f7d809826b59-kube-api-access-ckvhz\") on node \"crc\" DevicePath \"\"" Jan 26 00:38:04 crc kubenswrapper[5121]: I0126 00:38:04.778936 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489798-9wprp" Jan 26 00:38:04 crc kubenswrapper[5121]: I0126 00:38:04.778988 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489798-9wprp" event={"ID":"018a7977-e573-4985-bd3e-f7d809826b59","Type":"ContainerDied","Data":"fb498e809de3ac03c3e6eb9d35884c9b4cb43d5012c7cbcb6545a49c8cebd597"} Jan 26 00:38:04 crc kubenswrapper[5121]: I0126 00:38:04.779993 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb498e809de3ac03c3e6eb9d35884c9b4cb43d5012c7cbcb6545a49c8cebd597" Jan 26 00:38:05 crc kubenswrapper[5121]: I0126 00:38:05.496837 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-qcdfp"] Jan 26 00:38:05 crc kubenswrapper[5121]: I0126 00:38:05.503411 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489792-qcdfp"] Jan 26 00:38:06 crc kubenswrapper[5121]: I0126 00:38:06.327936 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a141d6ed-0de3-4599-85bc-881fafe98e8f" path="/var/lib/kubelet/pods/a141d6ed-0de3-4599-85bc-881fafe98e8f/volumes" Jan 26 00:38:31 crc kubenswrapper[5121]: I0126 00:38:31.802024 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:38:31 crc kubenswrapper[5121]: I0126 00:38:31.803126 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:38:34 crc kubenswrapper[5121]: I0126 00:38:34.095503 5121 scope.go:117] "RemoveContainer" containerID="fe3bc77ad2e30cd08c54bdd775bf8ef6e5e28855d49dc3f97c19bb22b1f3415a" Jan 26 00:39:01 crc kubenswrapper[5121]: I0126 00:39:01.802668 5121 patch_prober.go:28] interesting pod/machine-config-daemon-9w6w9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:39:01 crc kubenswrapper[5121]: I0126 00:39:01.804083 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:39:01 crc kubenswrapper[5121]: I0126 00:39:01.804552 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" Jan 26 00:39:01 crc kubenswrapper[5121]: I0126 00:39:01.805407 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223"} pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:39:01 crc 
kubenswrapper[5121]: I0126 00:39:01.805493 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" containerName="machine-config-daemon" containerID="cri-o://3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" gracePeriod=600 Jan 26 00:39:01 crc kubenswrapper[5121]: E0126 00:39:01.970119 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:39:02 crc kubenswrapper[5121]: I0126 00:39:02.281417 5121 generic.go:358] "Generic (PLEG): container finished" podID="62eaac02-ed09-4860-b496-07239e103d8d" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" exitCode=0 Jan 26 00:39:02 crc kubenswrapper[5121]: I0126 00:39:02.281516 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" event={"ID":"62eaac02-ed09-4860-b496-07239e103d8d","Type":"ContainerDied","Data":"3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223"} Jan 26 00:39:02 crc kubenswrapper[5121]: I0126 00:39:02.281790 5121 scope.go:117] "RemoveContainer" containerID="4c79cfd07312245deb96f45bff9db59875473423a89e6a0595e5a37fdb4ea55e" Jan 26 00:39:02 crc kubenswrapper[5121]: I0126 00:39:02.282713 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:39:02 crc kubenswrapper[5121]: E0126 00:39:02.283279 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:39:15 crc kubenswrapper[5121]: I0126 00:39:15.255482 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:39:15 crc kubenswrapper[5121]: E0126 00:39:15.256272 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:39:29 crc kubenswrapper[5121]: I0126 00:39:29.256615 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:39:29 crc kubenswrapper[5121]: E0126 00:39:29.258321 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.335196 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.338425 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_39a99d00-3116-42fa-95af-f93382aa1930/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.339060 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_b497883f-da14-4bfe-8e19-bba4b32b7f79/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.340212 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_39a99d00-3116-42fa-95af-f93382aa1930/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.340557 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.341928 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_15fe1e64-e056-4d07-97a6-d19ad38afe03/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.343408 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.344289 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_82be670b-4a27-4319-8431-ac1b86d3fc1a/docker-build/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.402693 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.402842 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-54c688565-9rgbz_069690ff-331e-4ee8-bed5-24d79f939a40/machine-approver-controller/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.411655 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.412057 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhg6w_21d6bae8-c026-4b2f-9127-ca53977e50d8/kube-multus/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.413121 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.413640 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dgvkt_fc4541ce-7789-4670-bc75-5c2868e52ce0/approver/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.416496 5121 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:39:33 crc kubenswrapper[5121]: I0126 00:39:33.416590 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 26 00:39:41 crc kubenswrapper[5121]: I0126 00:39:41.257161 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:39:41 crc kubenswrapper[5121]: E0126 00:39:41.258603 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:39:55 crc kubenswrapper[5121]: I0126 00:39:55.257221 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:39:55 crc kubenswrapper[5121]: E0126 00:39:55.258553 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.138481 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29489800-zptnv"] Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.139887 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="018a7977-e573-4985-bd3e-f7d809826b59" containerName="oc" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.140200 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="018a7977-e573-4985-bd3e-f7d809826b59" containerName="oc" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.140360 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="018a7977-e573-4985-bd3e-f7d809826b59" containerName="oc" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.149366 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489800-zptnv" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.155248 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.155566 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.155680 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g8w6q\"" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.157865 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489800-zptnv"] Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.167407 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt5hz\" (UniqueName: \"kubernetes.io/projected/e89e90ec-a7a9-49cc-8c65-344f888caef0-kube-api-access-rt5hz\") pod \"auto-csr-approver-29489800-zptnv\" (UID: \"e89e90ec-a7a9-49cc-8c65-344f888caef0\") " pod="openshift-infra/auto-csr-approver-29489800-zptnv" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.269103 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rt5hz\" (UniqueName: \"kubernetes.io/projected/e89e90ec-a7a9-49cc-8c65-344f888caef0-kube-api-access-rt5hz\") pod \"auto-csr-approver-29489800-zptnv\" (UID: \"e89e90ec-a7a9-49cc-8c65-344f888caef0\") " pod="openshift-infra/auto-csr-approver-29489800-zptnv" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.291595 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt5hz\" (UniqueName: \"kubernetes.io/projected/e89e90ec-a7a9-49cc-8c65-344f888caef0-kube-api-access-rt5hz\") pod \"auto-csr-approver-29489800-zptnv\" (UID: \"e89e90ec-a7a9-49cc-8c65-344f888caef0\") " pod="openshift-infra/auto-csr-approver-29489800-zptnv" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.493749 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489800-zptnv" Jan 26 00:40:00 crc kubenswrapper[5121]: I0126 00:40:00.735208 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29489800-zptnv"] Jan 26 00:40:01 crc kubenswrapper[5121]: I0126 00:40:01.034676 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489800-zptnv" event={"ID":"e89e90ec-a7a9-49cc-8c65-344f888caef0","Type":"ContainerStarted","Data":"9ac1971ceeb4a6c8e98cbcca5165b603b1c275d5eb35a46eafd7c74be95be40f"} Jan 26 00:40:03 crc kubenswrapper[5121]: I0126 00:40:03.054479 5121 generic.go:358] "Generic (PLEG): container finished" podID="e89e90ec-a7a9-49cc-8c65-344f888caef0" containerID="628672c260ef0782314740ab14e738f222928625222ce80537faa6c5d5462aa6" exitCode=0 Jan 26 00:40:03 crc kubenswrapper[5121]: I0126 00:40:03.054586 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489800-zptnv" event={"ID":"e89e90ec-a7a9-49cc-8c65-344f888caef0","Type":"ContainerDied","Data":"628672c260ef0782314740ab14e738f222928625222ce80537faa6c5d5462aa6"} Jan 26 00:40:04 crc kubenswrapper[5121]: I0126 00:40:04.328302 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29489800-zptnv" Jan 26 00:40:04 crc kubenswrapper[5121]: I0126 00:40:04.469242 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt5hz\" (UniqueName: \"kubernetes.io/projected/e89e90ec-a7a9-49cc-8c65-344f888caef0-kube-api-access-rt5hz\") pod \"e89e90ec-a7a9-49cc-8c65-344f888caef0\" (UID: \"e89e90ec-a7a9-49cc-8c65-344f888caef0\") " Jan 26 00:40:04 crc kubenswrapper[5121]: I0126 00:40:04.476531 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89e90ec-a7a9-49cc-8c65-344f888caef0-kube-api-access-rt5hz" (OuterVolumeSpecName: "kube-api-access-rt5hz") pod "e89e90ec-a7a9-49cc-8c65-344f888caef0" (UID: "e89e90ec-a7a9-49cc-8c65-344f888caef0"). InnerVolumeSpecName "kube-api-access-rt5hz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 26 00:40:04 crc kubenswrapper[5121]: I0126 00:40:04.571586 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rt5hz\" (UniqueName: \"kubernetes.io/projected/e89e90ec-a7a9-49cc-8c65-344f888caef0-kube-api-access-rt5hz\") on node \"crc\" DevicePath \"\"" Jan 26 00:40:05 crc kubenswrapper[5121]: I0126 00:40:05.073135 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29489800-zptnv" Jan 26 00:40:05 crc kubenswrapper[5121]: I0126 00:40:05.073136 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29489800-zptnv" event={"ID":"e89e90ec-a7a9-49cc-8c65-344f888caef0","Type":"ContainerDied","Data":"9ac1971ceeb4a6c8e98cbcca5165b603b1c275d5eb35a46eafd7c74be95be40f"} Jan 26 00:40:05 crc kubenswrapper[5121]: I0126 00:40:05.073211 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac1971ceeb4a6c8e98cbcca5165b603b1c275d5eb35a46eafd7c74be95be40f" Jan 26 00:40:05 crc kubenswrapper[5121]: I0126 00:40:05.411330 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-7vv9p"] Jan 26 00:40:05 crc kubenswrapper[5121]: I0126 00:40:05.419652 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29489794-7vv9p"] Jan 26 00:40:06 crc kubenswrapper[5121]: I0126 00:40:06.268809 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a171d695-ffb7-47b1-9c43-0800ab8d9c59" path="/var/lib/kubelet/pods/a171d695-ffb7-47b1-9c43-0800ab8d9c59/volumes" Jan 26 00:40:08 crc kubenswrapper[5121]: I0126 00:40:08.256454 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:40:08 crc kubenswrapper[5121]: E0126 00:40:08.256939 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:40:21 crc kubenswrapper[5121]: I0126 00:40:21.256288 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:40:21 crc kubenswrapper[5121]: E0126 00:40:21.257398 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:40:34 crc kubenswrapper[5121]: I0126 00:40:34.267481 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:40:34 crc kubenswrapper[5121]: E0126 00:40:34.268419 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:40:34 crc kubenswrapper[5121]: I0126 00:40:34.315480 5121 scope.go:117] "RemoveContainer" containerID="768ffd361758d3df5cfac75c558da6538fff7a45adfe432a29f23c07a8d81951" Jan 26 00:40:47 crc kubenswrapper[5121]: I0126 00:40:47.257204 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:40:47 crc kubenswrapper[5121]: E0126 00:40:47.258756 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d" Jan 26 00:40:59 crc kubenswrapper[5121]: I0126 00:40:59.256255 5121 scope.go:117] "RemoveContainer" containerID="3d9f769c1b5e9814d60206b8fd0f73bd87071fb1052c723a16786bbb81b4f223" Jan 26 00:40:59 crc kubenswrapper[5121]: E0126 00:40:59.259164 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9w6w9_openshift-machine-config-operator(62eaac02-ed09-4860-b496-07239e103d8d)\"" pod="openshift-machine-config-operator/machine-config-daemon-9w6w9" podUID="62eaac02-ed09-4860-b496-07239e103d8d"